Journal ArticleDOI

Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

01 Jan 2016-Medical Image Analysis (Elsevier)-Vol. 27, pp 93-104
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction; the resulting method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
About: This article was published in Medical Image Analysis on 2016-01-01 and has received 150 citations to date. The article focuses on the topics: Iterative reconstruction & Wavelet transform.
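For context on the kind of pipeline this TL;DR refers to, the sketch below implements a generic iterative CS-MRI reconstruction: sparsifying-transform soft thresholding alternated with k-space data consistency. It is a minimal illustration only, using a one-level Haar transform as a stand-in sparsifier rather than the paper's graph-based redundant wavelet transform; the boolean sampling mask, the threshold lam, and the function names are assumptions made for the example.

```python
import numpy as np

def haar2(x):
    """One-level 2D orthonormal Haar transform of an even-sized (complex) image."""
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row averages
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row details
    x = np.hstack([a, d])
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # column averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # column details
    return np.vstack([a, d])

def ihaar2(c):
    """Inverse of haar2."""
    n, m = c.shape
    x = np.empty_like(c)
    a, d = c[: n // 2, :], c[n // 2 :, :]
    x[0::2, :] = (a + d) / np.sqrt(2)
    x[1::2, :] = (a - d) / np.sqrt(2)
    y = np.empty_like(c)
    a, d = x[:, : m // 2], x[:, m // 2 :]
    y[:, 0::2] = (a + d) / np.sqrt(2)
    y[:, 1::2] = (a - d) / np.sqrt(2)
    return y

def cs_mri_reconstruct(kspace, mask, n_iter=100, lam=0.02):
    """Alternate wavelet-domain soft thresholding with k-space data consistency.

    kspace: measured k-space (zeros where not sampled); mask: boolean sampling pattern.
    """
    x = np.fft.ifft2(kspace * mask, norm="ortho")             # zero-filled start
    for _ in range(n_iter):
        c = haar2(x)                                           # sparsifying transform
        mag = np.abs(c)
        c *= np.maximum(mag - lam, 0) / np.maximum(mag, 1e-12)  # complex soft threshold
        x = ihaar2(c)
        k = np.fft.fft2(x, norm="ortho")
        k[mask] = kspace[mask]                                 # restore measured samples
        x = np.fft.ifft2(k, norm="ortho")
    return x
```

In the paper's setting, a graph-based redundant wavelet transform would take the place of haar2/ihaar2 in the thresholding step; everything else in the loop is the standard CS-MRI scaffolding.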
Citations
Journal ArticleDOI
TL;DR: In this paper, a novel compressed sensing method for the reconstruction of medical images is proposed; the image edges are well preserved with the proposed reweighted total variation (TV) and non-local self-similarity (NSS).
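As a rough illustration of the reweighting idea mentioned in this TL;DR, the snippet below computes edge-preserving total-variation weights that shrink where the image gradient is large, so strong edges are penalized less in the next iteration. This is only a sketch of the generic reweighted-TV mechanism, not the cited paper's formulation; the helper name and the smoothing constant eps are assumptions.

```python
import numpy as np

def reweighted_tv_weights(x, eps=1e-3):
    """Per-pixel TV weights for a real-valued image: small at strong edges, large in smooth regions."""
    gx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal finite differences
    gy = np.diff(x, axis=0, append=x[-1:, :])   # vertical finite differences
    grad_mag = np.sqrt(gx**2 + gy**2)
    return 1.0 / (grad_mag + eps)               # w_i = 1 / (|grad x|_i + eps)
```

These weights would multiply the TV penalty term in the next reconstruction iteration, which is how the edges end up being preserved.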

3 citations

Posted Content
TL;DR: The proposed model significantly outperforms the existing CS-MRI reconstruction techniques in terms of peak signal-to-noise ratio as well as structural similarity index, and uses significantly fewer trainable parameters to do so, as compared to the real-valued deep learning based methods.
Abstract: Compressive sensing (CS) is widely used to reduce the acquisition time of magnetic resonance imaging (MRI). Although state-of-the-art deep learning based methods have been able to obtain fast, high-quality reconstruction of CS-MR images, their main drawback is that they treat complex-valued MRI data as real-valued entities. Most methods either extract the magnitude from the complex-valued entities or concatenate them as two real-valued channels. In both the cases, the phase content, which links the real and imaginary parts of the complex-valued entities, is discarded. In order to address the fundamental problem of real-valued deep networks, i.e. their inability to process complex-valued data, we propose a novel framework based on a complex-valued generative adversarial network (Co-VeGAN). Our model can process complex-valued input, which enables it to perform high-quality reconstruction of the CS-MR images. Further, considering that phase is a crucial component of complex-valued entities, we propose a novel complex-valued activation function, which is sensitive to the phase of the input. Extensive evaluation of the proposed approach on different datasets using various sampling masks demonstrates that the proposed model significantly outperforms the existing CS-MRI reconstruction techniques in terms of peak signal-to-noise ratio as well as structural similarity index. Further, it uses significantly fewer trainable parameters to do so, as compared to the real-valued deep learning based methods.
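The abstract's central point is that the nonlinearity must react to the phase of a complex-valued input. The snippet below shows two well-known complex-valued activations (modReLU and zReLU) purely to illustrate what phase-aware nonlinearities look like; it is not the paper's proposed activation, and the bias value is an arbitrary assumption.

```python
import numpy as np

def modrelu(z, b=-0.1):
    """modReLU: threshold the magnitude, keep the phase of the complex input."""
    mag = np.abs(z)
    scale = np.maximum(mag + b, 0.0) / np.maximum(mag, 1e-12)
    return z * scale

def zrelu(z):
    """zReLU: pass values whose phase lies in the first quadrant, zero otherwise."""
    keep = (z.real >= 0) & (z.imag >= 0)
    return np.where(keep, z, 0)

# tiny usage example on a complex-valued feature map
z = np.array([1 + 1j, -1 + 1j, 0.05 - 0.02j])
print(modrelu(z), zrelu(z))
```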

3 citations


Cites background from "Image reconstruction of compressed ..."

  • ...prior knowledge on the structure of the MR image to be reconstructed. Sparse representations can be explored by the use of predefined transforms [2] such as total variation [3], discrete wavelet transform [4], etc. Alternatively, dictionary learning based methods [5] learn sparse representations from the subspace spanned by the data. Both these types of approaches suffer from long computation time due to ...

    [...]

Proceedings ArticleDOI
Zhenyu Hu, Qiuye Wang, Congcong Ming, Lai Wang, Yuanqing Hu, Jian Zou
01 Dec 2015
TL;DR: A new CS-MRI reconstruction algorithm based on the contourlet transform and the split Bregman method is presented, which not only enforces the curve sparsity of MR images with fast computation but also achieves better reconstruction accuracy.
Abstract: Compressed sensing (CS) based methods have recently been used to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. In traditional CS-MRI, the wavelet transform can hardly capture the information of image curves and edges. In this paper, we present a new CS-MRI reconstruction algorithm based on the contourlet transform and the split Bregman method. In contrast with wavelet-based algorithms, the proposed method not only enforces the curve sparsity of MR images with fast computation but also achieves better reconstruction accuracy. Numerical results show the effectiveness of the proposed algorithm.
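For readers unfamiliar with the split Bregman method named in the abstract, the sketch below applies it to a generic ℓ1-regularized least-squares problem with an explicit matrix, which is the simplest setting in which the alternating shrinkage and Bregman updates can be seen. It does not include the contourlet transform or the undersampled Fourier operator of the cited paper, and the parameter values are illustrative assumptions.

```python
import numpy as np

def split_bregman_l1(A, b, lam=0.1, mu=1.0, n_iter=200):
    """Split Bregman solver for min_x lam*||x||_1 + 0.5*||Ax - b||^2 (splitting d ~ x)."""
    m, n = A.shape
    x = np.zeros(n)
    d = np.zeros(n)                                  # splitting variable
    v = np.zeros(n)                                  # Bregman variable
    Atb = A.T @ b
    P = np.linalg.inv(A.T @ A + mu * np.eye(n))      # fine for small demo problems
    for _ in range(n_iter):
        x = P @ (Atb + mu * (d - v))                 # quadratic subproblem
        w = x + v
        d = np.sign(w) * np.maximum(np.abs(w) - lam / mu, 0)  # shrinkage step
        v = v + x - d                                # Bregman update
    return x

# tiny usage example: recover a sparse vector from random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
x_hat = split_bregman_l1(A, A @ x_true, lam=0.05)
```

In the CS-MRI setting, the least-squares term involves the undersampled Fourier operator and the ℓ1 term acts on contourlet (or wavelet) coefficients, but the alternation has the same shape.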

3 citations


Cites methods from "Image reconstruction of compressed ..."

  • ...In traditional CS-MRI, wavelet transform is commonly used as a sparse transform [6], [7]....

    [...]

Posted Content
TL;DR: This article provides a deep learning (DL) strategy for BCS, called AutoBCS, which automatically takes the prior knowledge of images into account in the acquisition step and establishes a reconstruction model for performing fast image reconstruction.
Abstract: Block compressive sensing is a well-known signal acquisition and reconstruction paradigm with widespread application prospects in science, engineering and cybernetic systems. However, state-of-the-art block-based image compressive sensing (BCS) methods generally suffer from two issues. The sparsifying domain and the sensing matrices widely used for image acquisition are not data-driven, and thus both the features of the image and the relationships among subblock images are ignored. Moreover, doing so requires addressing high-dimensional optimization problems with extensive computational complexity for image reconstruction. In this paper, we provide a deep learning strategy for BCS, called AutoBCS, which takes the prior knowledge of images into account in the acquisition step and establishes a subsequent reconstruction model for performing fast image reconstruction with a low computational cost. More precisely, we present a learning-based sensing matrix (LSM) derived from training data to accomplish image acquisition, thereby capturing and preserving more image characteristics than those captured by existing methods. In particular, the generated LSM is proven to satisfy the theoretical requirements of compressive sensing, such as the so-called restricted isometry property. Additionally, we build a noniterative reconstruction network, which provides an end-to-end BCS reconstruction framework to eliminate blocking artifacts and maximize image reconstruction accuracy, in our AutoBCS architecture. Furthermore, we investigate comprehensive comparison studies with both traditional BCS approaches and newly developed deep learning methods. Compared with these approaches, our AutoBCS framework can not only provide superior performance in terms of image quality metrics (SSIM and PSNR) and visual perception, but also automatically benefit reconstruction speed.
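To make the block-based acquisition step concrete, the sketch below splits an image into non-overlapping blocks and measures each block with the same sensing matrix. It uses a random Gaussian matrix purely as a placeholder, whereas AutoBCS learns the sensing matrix from training data; the function name, block size, and sampling ratio are assumptions.

```python
import numpy as np

def bcs_acquire(image, block=16, ratio=0.25, seed=0):
    """Block compressive sensing: sense every non-overlapping block with one matrix."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    n = block * block
    m = int(round(ratio * n))                             # measurements per block
    phi = rng.standard_normal((m, n)) / np.sqrt(m)        # placeholder sensing matrix
    measurements = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block].reshape(-1)
            measurements.append(phi @ patch)              # y = Phi * x for this block
    return phi, np.stack(measurements)
```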

3 citations


Cites methods from "Image reconstruction of compressed ..."

  • ...To capture such dependencies, various specialized and sophisticated regularizations have been exploited for CS with images, most remarkably, attribute correlation learning [11–13], group/structured sparsity [14], Bayesian/model-based sparsity [15], low-rank regularization [16], and nonlocal sparsity [17]....

    [...]

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information; it is compared to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
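As a concrete reference point, the snippet below evaluates the SSIM expression from global image statistics. The published index computes the same expression over local (Gaussian-weighted) windows and averages the resulting map, so this is a simplified sketch rather than the authors' implementation; the data_range default is an assumption.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global-statistics SSIM; the full index averages this over local windows."""
    c1 = (0.01 * data_range) ** 2          # stabilizing constants from the paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```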

40,609 citations

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations is proposed: the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data
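The alternation the abstract describes can be summarized in a short sketch: orthogonal matching pursuit for the sparse-coding stage, followed by a rank-one SVD update of each dictionary atom together with its coefficients. This is a minimal, unoptimized illustration of the K-SVD idea, not the authors' implementation; the initialization and parameter defaults are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of signal y in dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))       # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                           # update residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Minimal K-SVD: alternate OMP sparse coding and SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])   # sparse coding
        for j in range(n_atoms):                                  # dictionary update
            users = np.nonzero(X[j, :])[0]                        # signals using atom j
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)      # rank-one fit of the error
            D[:, j] = U[:, 0]
            X[j, users] = s[0] * Vt[0, :]
    return D, X
```

Training the patch-based dictionary mentioned in the excerpt below amounts to calling such a routine on a matrix Y whose columns are vectorized image patches.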

8,905 citations


"Image reconstruction of compressed ..." refers methods in this paper

  • ...Assuming that image patches are linear combinations of element patches, Aharon et al. have used K-SVD to train a patch-based dictionary (Aharon et al., 2006; Ravishankar and Bresler, 2011)....

    [...]

Journal ArticleDOI
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.
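The algorithm analyzed in this paper, a Landweber (gradient) step followed by thresholding, reduces in the p = 1 case to the familiar iterative soft-thresholding scheme. The sketch below shows that scheme for a generic matrix operator; the step-size rule and parameter defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Landweber iteration with soft thresholding for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # step <= 1/||A||^2 for convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))               # Landweber (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft threshold
    return x
```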

4,339 citations


Additional excerpts

  • ...When β → +∞, expression (6) approaches (5) (Daubechies et al., 2004; Junfeng et al., 2010)....

    [...]

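The excerpt above refers to expressions (5) and (6) of the paper, which are not reproduced on this page. As a hedged illustration only, the standard quadratic-penalty construction that such statements usually describe looks like the following: an auxiliary variable z is coupled to the transform coefficients with weight β, and letting β → +∞ forces z = Ψx, so the penalized surrogate approaches the original problem. The symbols Ψ, A, y, and μ here are generic placeholders, not necessarily the paper's notation.

```latex
% Illustrative quadratic-penalty relaxation (generic; not the paper's exact (5)-(6))
\min_{x}\; \|\Psi x\|_1 + \frac{\mu}{2}\,\|A x - y\|_2^2
\qquad\Longleftarrow\qquad
\min_{x,\,z}\; \|z\|_1 + \frac{\beta}{2}\,\|z - \Psi x\|_2^2 + \frac{\mu}{2}\,\|A x - y\|_2^2
\quad (\beta \to +\infty)
```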