Journal ArticleDOI

Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

01 Jan 2016-Medical Image Analysis (Elsevier)-Vol. 27, pp 93-104
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction; it outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction error on the tested datasets.
About: This article is published in Medical Image Analysis. The article was published on 2016-01-01. It has received 150 citations to date. The article focuses on the topics: Iterative reconstruction & Wavelet transform.
Citations
Journal ArticleDOI
Lan Pu, Zhang Jiang-tao, Xia Kewen, Zhou Qiao, He Ziping
TL;DR: An improved algorithm is proposed that replaces the fixed threshold of the SWOMP algorithm with an S-shaped function value at each iteration, overcoming the over- and underestimation caused by selecting a fixed threshold parameter in every iteration.
Abstract: One of the key technologies of compressed sensing is signal reconstruction, whose two important indicators are the reconstruction probability and the time consumed. The Stagewise Weak Orthogonal Matching Pursuit (SWOMP) algorithm is widely used because the sparsity does not need to be known a priori. However, using a fixed threshold parameter in the iterative process can easily lead to overestimation and underestimation. Inspired by the idea that "the initial stage approaches quickly and the final stage approaches gradually," that is, a "first fast, then slow" search rule, an improved algorithm is proposed that replaces the fixed threshold with an S-shaped function value at each iteration. Comparative experiments with six different S-shaped functions show that each influences the SWOMP algorithm differently, and that the improved algorithm with the sixth S-shaped function achieves the best reconstruction.
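The iteration-dependent threshold idea can be sketched in a few lines. This is a minimal illustration assuming a generic logistic S-curve and a Gaussian sensing matrix; the paper's six specific S-shaped functions are not reproduced here.

```python
import numpy as np

def sigmoid_schedule(t, n_iter, lo=0.4, hi=0.9):
    """Illustrative S-shaped weak threshold: loose early (many atoms picked,
    fast approach), tight late (few atoms, gradual refinement)."""
    s = 1.0 / (1.0 + np.exp(-10.0 * (t / n_iter - 0.5)))
    return lo + (hi - lo) * s

def swomp(A, y, n_iter=8, tol=1e-6):
    """Stagewise Weak OMP with an iteration-dependent weak threshold."""
    m, n = A.shape
    support = set()
    r = y.copy()
    x = np.zeros(n)
    for t in range(n_iter):
        c = np.abs(A.T @ r)                          # correlations with residual
        alpha = sigmoid_schedule(t, n_iter)          # threshold for this stage
        picked = np.where(c >= alpha * c.max())[0]   # weak selection rule
        support.update(picked.tolist())
        S = sorted(support)
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)  # LS fit on support
        x = np.zeros(n)
        x[S] = x_S
        r = y - A @ x
        if np.linalg.norm(r) < tol:
            break
    return x
```

With a fixed threshold, every stage admits atoms at the same correlation cutoff; the schedule above instead widens the net early and narrows it as the residual shrinks.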

2 citations

Journal ArticleDOI
TL;DR: In this paper, a deep auto-encoder was used to form a manifold of image patches of the patient and the trained manifold was then incorporated as a regularization to restore MR images of the same patient from undersampled data.
Abstract: Introduction: Recent advancements in radiotherapy (RT) have allowed for the integration of a Magnetic Resonance (MR) imaging scanner with a medical linear accelerator to use MR images for image guidance to position tumors against the treatment beam. Undersampling in MR acquisition is desired to accelerate the imaging process, but unavoidably deteriorates the reconstructed image quality. In RT, a high-quality MR image of a patient is available for treatment planning. In light of this unique clinical scenario, we proposed to exploit this patient-specific image prior to facilitate high-quality MR image reconstruction.
Methods: Utilizing the planning MR image, we established a deep auto-encoder to form a manifold of image patches of the patient. The trained manifold was then incorporated as a regularization to restore MR images of the same patient from undersampled data. We performed a simulation study using a patient case, a real patient study with three liver cancer patient cases, and a phantom experimental study using data acquired on an in-house small animal MR scanner. We compared the performance of the proposed method with those of the Fourier transform method, a tight-frame based Compressive Sensing method, and a deep learning method with a patient-generic manifold as the image prior.
Results: In the simulation study with 12.5% radial undersampling and a 15% increase in noise, our method improved peak signal-to-noise ratio by 4.46 dB and structural similarity index measure by 28% compared to the patient-generic manifold method. In the experimental study, our method outperformed the others by producing reconstructions of visually improved image quality.
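The general shape of this kind of prior-regularized reconstruction can be sketched by alternating a pull toward a learned prior with a k-space data-consistency step. Here `project` is a hypothetical stand-in for the paper's trained auto-encoder manifold; any learned denoiser or projector could be plugged in.

```python
import numpy as np

def reconstruct(y, mask, project, n_iter=50, lam=0.5):
    """Restore an image from undersampled k-space `y` (full-size array,
    valid only where `mask` is True), regularized by a plug-in prior."""
    x = np.fft.ifft2(y * mask)                    # zero-filled starting image
    for _ in range(n_iter):
        x = (1 - lam) * x + lam * project(x)      # pull toward the image prior
        k = np.fft.fft2(x)
        k[mask] = y[mask]                         # enforce measured samples
        x = np.fft.ifft2(k)                       # back to image space
    return x
```

MR images are naturally complex-valued, so the sketch keeps the complex image throughout; the strength of the prior is controlled by `lam`.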

2 citations

Book ChapterDOI
03 Jun 2020
TL;DR: An adaptive, near-optimal 3D-to-1D ordering methodology for brain magnetic resonance imaging (MRI) data is developed, using a space-filling curve (SFC) trajectory that adapts to the brain's shape as captured by MRI.
Abstract: In this work, we develop an adaptive, near-optimal, 3-dimensional (3D) to 1D ordering methodology for brain magnetic resonance imaging (MRI) data, using a space-filling curve (SFC) trajectory that adapts to the brain's shape as captured by MRI. We present the pseudocode of the heuristics for developing the SFC trajectory. We apply this trajectory to functional MRI brain activation maps from a schizophrenia study, compress the data, obtain features, and perform classification of schizophrenia patients vs. normal controls. We compare the classification results with those of a linear ordering trajectory, which has been the traditional method for ordering 3D MRI data in 1D, and report that the adaptive SFC trajectory-based classification performance is superior to the linear ordering trajectory-based classification.
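The chapter's SFC is adaptive to the brain mask; as a point of comparison, the simplest fixed space-filling ordering, the Z-order (Morton) curve, can be written in a few lines. This is a generic illustration of 3D-to-1D ordering with spatial locality, not the paper's heuristic, and it assumes a cubic volume with power-of-two side length.

```python
import numpy as np

def morton_order(shape):
    """Voxel coordinates of a cubic (side = 2^k) volume in Z-order sequence."""
    def interleave(x, y, z, bits):
        code = 0
        for b in range(bits):       # interleave the bits of x, y, z
            code |= (((x >> b) & 1) << (3 * b)
                     | ((y >> b) & 1) << (3 * b + 1)
                     | ((z >> b) & 1) << (3 * b + 2))
        return code
    n = shape[0]
    bits = n.bit_length() - 1
    coords = [(x, y, z) for x in range(n) for y in range(n) for z in range(n)]
    coords.sort(key=lambda c: interleave(*c, bits))
    return coords

def flatten(volume, order):
    """Read a 3D volume out as a 1D signal along the given trajectory."""
    return np.array([volume[c] for c in order])
```

Unlike raster (linear) ordering, consecutive points on the Z-order curve tend to be spatial neighbors, which is what makes SFC-ordered 1D signals compress better.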

2 citations

Posted Content
TL;DR: In this paper, the authors propose a new framework for CS-MRI inversion in which they decompose the observed k-space data into subspaces via sets of filters in a lossless way, and reconstruct the images in these various spaces individually using off-the-shelf algorithms.
Abstract: Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires. In traditional CS-MRI inversion methods, the fact that the energy within the Fourier measurement domain is distributed non-uniformly is often neglected during reconstruction. As a result, more densely sampled low-frequency information tends to dominate penalization schemes for reconstructing MRI at the expense of high-frequency details. In this paper, we propose a new framework for CS-MRI inversion in which we decompose the observed k-space data into "subspaces" via sets of filters in a lossless way, and reconstruct the images in these various spaces individually using off-the-shelf algorithms. We then fuse the results to obtain the final reconstruction. In this way we are able to focus reconstruction on frequency information within the entire k-space more equally, preserving both high and low frequency details. We demonstrate that the proposed framework is competitive with state-of-the-art methods in CS-MRI in terms of quantitative performance, and often improves an algorithm's results qualitatively compared with its direct application to k-space.
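The decomposition idea can be illustrated with the simplest lossless split, a low-frequency disc and its complement. The paper uses general filter sets; here the per-band "reconstructors" are plain inverse FFTs standing in for the off-the-shelf CS solvers, which is enough to show that the split loses nothing and the fusion is exact for linear reconstructions.

```python
import numpy as np

def split_kspace(k, radius):
    """Losslessly split k-space into a low-frequency disc and its complement."""
    n = k.shape[0]
    f = np.fft.fftfreq(n) * n                 # integer frequency indices
    fy, fx = np.meshgrid(f, f, indexing="ij")
    low = (fx ** 2 + fy ** 2) <= radius ** 2  # binary low-pass mask
    return k * low, k * ~low                  # the two masks partition k-space

def fuse(recons):
    """Fuse per-band images; for linear reconstructions this sum is exact."""
    return sum(recons)
```

Because the two masks partition k-space, `klo + khi` recovers the original data exactly, and each band can be penalized on its own scale before fusion.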

1 citations

DissertationDOI
03 Dec 2019
TL;DR: This dissertation aims to provide a history of web exceptionalism from 1989 to 2002, a period chosen in order to explore its roots as well as specific cases up to and including the year in which descriptions of “Web 2.0” began to circulate.

1 citations


Cites background from "Image reconstruction of compressed ..."

  • ...Magnetic resonance imaging (MRI) has particularly benefitted from the application of CS [84] resulting in significantly improved scan times [85, 86] and the removal of artifacts and reconstruction errors [87]....


References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and is validated against both subjective ratings and objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
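The index has a simple closed form. The sketch below computes it once over the whole image; the published SSIM instead averages the same statistic over local Gaussian-weighted windows, with default constants K1 = 0.01, K2 = 0.03 and dynamic range L.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM: luminance, contrast, and structure terms combined.
    The published index averages this over local 11x11 Gaussian windows."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2     # stabilizers for flat regions
    mx, my = x.mean(), y.mean()               # local luminance
    vx, vy = x.var(), y.var()                 # local contrast (variance)
    cov = ((x - mx) * (y - my)).mean()        # structural correlation
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

The index equals 1 only for identical images and falls as luminance, contrast, or structure diverge, which is what lets it track perceived quality better than pixelwise error.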

40,609 citations

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: A novel algorithm, K-SVD, is proposed for adapting dictionaries to achieve sparse signal representations; it is an iterative method that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, which generalizes the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
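The two alternating stages can be sketched in a few dozen lines. This is a bare-bones reading of the algorithm, with OMP as the pursuit method for the coding stage and a rank-1 SVD for each atom update, not the authors' implementation.

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding: pick k atoms by correlation with the residual."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, k, n_iter=10, seed=0):
    """Learn a dictionary D and sparse codes X so that Y ~ D @ X."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding stage: code every training signal with current D.
        X = np.column_stack([omp(D, y, k) for y in Y.T])
        # Dictionary update stage: refit each atom to its users' residual.
        for j in range(n_atoms):
            users = np.nonzero(X[j])[0]
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                 # best rank-1 fit updates the atom
            X[j, users] = s[0] * Vt[0]        # ...and its coefficients jointly
    return D, X
```

Updating the coefficients together with each atom (via the SVD) is what distinguishes K-SVD from a plain alternating least-squares scheme and is the source of its faster convergence.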

8,905 citations


"Image reconstruction of compressed ..." refers methods in this paper

  • ...Assuming that image patches are linear combinations of element patches, Aharon et al. have used K-SVD to train a patch-based dictionary (Aharon et al., 2006; Ravishankar and Bresler, 2011)....


Journal ArticleDOI
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.
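For p = 1 the thresholding step is the soft-shrinkage operator, giving the familiar iterative soft-thresholding scheme. A minimal sketch for a generic matrix A follows; the step-size choice below is the standard condition under which the convergence proved in the paper holds.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Landweber iteration with soft thresholding (the p = 1 case):
    minimizes 0.5 * ||y - A x||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # step <= 1/||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)            # Landweber (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```

Each iteration is a gradient step on the data term followed by componentwise shrinkage, and the objective is non-increasing from any starting point under the step-size condition.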

4,339 citations


Additional excerpts

  • ...When β → +∞, expression (6) approaches (5) (Daubechies et al., 2004; Junfeng et al., 2010)....

