Journal ArticleDOI

Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

01 Jan 2016-Medical Image Analysis (Elsevier)-Vol. 27, pp 93-104
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions and outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves fewer reconstruction errors on the tested datasets.
About: This article is published in Medical Image Analysis. The article was published on 2016-01-01. It has received 150 citations to date. The article focuses on the topics: Iterative reconstruction & Wavelet transform.
Citations
Proceedings ArticleDOI
TL;DR: A deep geometric distillation network is proposed that combines the merits of model-based and deep-learning-based CS-MRI methods and is theoretically guaranteed to improve the geometric texture details of a linear reconstruction.
Abstract: Compressed sensing (CS) is an efficient method for reconstructing an MR image from a small amount of sampled $k$-space data, thereby accelerating MRI acquisition. In this work, we propose a novel deep geometric distillation network that combines the merits of model-based and deep-learning-based CS-MRI methods; it is theoretically guaranteed to improve the geometric texture details of a linear reconstruction. First, we unfold the model-based CS-MRI optimization problem into two sub-problems: image linear approximation and image geometric compensation. Second, the geometric compensation sub-problem, which distills the texture details lost in the approximation stage, is expanded by Taylor expansion to design a geometric distillation module that fuses features from different geometric characteristic domains. Additionally, we use a learnable version of the step-length parameter with adaptive initialization, which gives the model more flexibility and leads to smoother convergence. Numerical experiments verify its superiority over other state-of-the-art CS-MRI reconstruction approaches. The source code will be available at \url{this https URL}
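The unfolding described above can be illustrated with a toy sketch of the "image linear approximation" sub-problem: a gradient step on the data-fidelity term with a step length alpha that, in the paper's learnable version, would be a trained parameter. This is not the authors' network; A is a random matrix standing in for an undersampled Fourier operator, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                  # image size (flattened), samples
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in measurement operator
x_true = np.zeros(n)
x_true[:5] = 1.0                               # simple toy "image"
y = A @ x_true                                 # simulated undersampled data

def linear_approximation_step(x, alpha):
    """Data-consistency gradient step: x - alpha * A^T (A x - y)."""
    return x - alpha * (A.T @ (A @ x - y))

x = np.zeros(n)
alpha = 0.1                                    # learnable in the paper's model
for _ in range(500):
    x = linear_approximation_step(x, alpha)

residual = np.linalg.norm(A @ x - y)           # data mismatch shrinks toward 0
```

The geometric compensation module would then refine this linear estimate; here only the approximation stage is shown.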

4 citations

Journal ArticleDOI
TL;DR: This work proposes a K sparse autoencoder model for reconstruction of MR images from undersampled k-space data and implements a cascaded form of reconstruction, incorporating three K sparse autoencoders with three different K values.
Abstract: Owing to the sequential collection of phase encoded data in k-space, magnetic resonance (MR) imaging suffers from long acquisition time. One possible measure to reduce the long acquisition time is to reconstruct MR image using a subset of k-space MR data rather than the complete set. In this work, we propose to implement a K sparse autoencoder model for reconstruction of MR image from undersampled k-space data. Autoencoder models, which have shown great ability in capturing the complex features of input data, can be used to reconstruct high-quality MR image. The reconstruction process involved solving an optimization problem whose solution was expected to satisfy the data consistency and also lie in close proximity to the output space of trained K sparse autoencoder. Observing the effect of sparsity value enforced by K sparse autoencoder on the reconstructed output, we implemented the cascaded form of reconstruction, incorporating three K sparse autoencoders with three different K values. Using MR-PD and MR-T1 images, reconstruction performance of the proposed approach was compared with those of the conventional reconstruction approaches. The quantitative as well as the qualitative analysis of the reconstructed images, obtained using the proposed approach, validates the efficiency of the proposed approach.
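A minimal sketch of the K-sparse constraint described above: after a linear encoding, only the K largest-magnitude hidden activations are kept and the rest are zeroed before decoding. The weights here are random stand-ins, not trained, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 16, 32                                # input and hidden dimensions
W_enc = rng.standard_normal((h, d)) * 0.1
W_dec = rng.standard_normal((d, h)) * 0.1

def k_sparse_forward(x, K):
    z = W_enc @ x                            # dense linear code
    keep = np.argsort(np.abs(z))[-K:]        # indices of the K largest
    z_sparse = np.zeros_like(z)
    z_sparse[keep] = z[keep]                 # enforce exactly K nonzeros
    return W_dec @ z_sparse, z_sparse

x = rng.standard_normal(d)
x_hat, code = k_sparse_forward(x, K=5)
```

Cascading three such autoencoders with different K values, as the paper does, amounts to chaining these forward passes with progressively chosen sparsity levels.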

4 citations

Journal ArticleDOI
TL;DR: Empirical analysis shows the proposed bi-dimensional local mean decomposition (BLMD) can quickly decompose and maintain the characteristics of data-drivenness, adaptability and scale consistency of LMD and avoid the disadvantages of other adaptive processing algorithms.
Abstract: Because images contain rich characteristic information, adaptive image decomposition algorithms are needed to achieve multi-scale extraction of image information. To this end, building on local mean decomposition (LMD), which has good self-adaptive characteristics, this paper proposes a new adaptive image processing algorithm, bi-dimensional local mean decomposition (BLMD). BLMD decomposes the original image into multiple bi-dimensional product functions (BPFs). For the decomposition itself, this paper proposes targeted designs for the extraction of extremum points, the interpolation methods used in the screening process, and the decomposition and stopping conditions involved in BLMD. Recognizing the self-adaptive and multi-scale characteristics of BLMD, this paper proposes a variable neighborhood window method to obtain the extreme points during decomposition and uses fractal theory to interpolate the image and obtain the corresponding mean surface and related information. The number of non-coincident extreme points in the zero-valued plane projection between adjacent surfaces in the screening process is then counted and analyzed, and a stopping condition matched to the characteristics of the image is given to ensure that each BPF component obtained by decomposition accurately reflects particular feature information of the image. Empirical analysis shows that the method decomposes quickly while maintaining the data-driven, adaptive, and scale-consistent characteristics of LMD; it also avoids the disadvantages of other adaptive processing algorithms, such as bi-dimensional empirical mode decomposition, in which the bi-dimensional intrinsic mode functions and the residual may fail to completely contain the feature information of the original image.
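One ingredient described above can be sketched in isolation: detecting the local extrema of an image with a neighborhood window. Here the half-width w is fixed; in the paper's variable-neighborhood-window method it would adapt to the local scale of the image. This is a hedged illustration, not the paper's algorithm.

```python
import numpy as np

def local_maxima(img, w):
    """Mark pixels that are the unique maximum of their (2w+1)^2 window."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            patch = img[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1]
            top = patch.max()
            mask[i, j] = (img[i, j] == top) and (np.count_nonzero(patch == top) == 1)
    return mask

img = np.zeros((5, 5))
img[2, 2] = 1.0                              # a single bright peak
peaks = local_maxima(img, w=1)
```

A symmetric `local_minima` would complete the extremum extraction; interpolating surfaces through the two sets would then yield the mean surface used in the sifting step.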

4 citations

Journal ArticleDOI
TL;DR: A novel, locally statistical active contour model (ACM) for magnetic resonance image segmentation in the presence of intense inhomogeneity with the ability to determine the position of contour and energy diagram is introduced.
Abstract: Introduction Brain image segmentation is one of the most important clinical tools used in radiology and radiotherapy, but accurate segmentation is difficult because these images often contain noise, inhomogeneities, and sometimes aberrations. The purpose of this study was to introduce a novel, locally statistical active contour model (ACM) for magnetic resonance image segmentation in the presence of intense inhomogeneity, with the ability to determine the position of the contour and the energy diagram. Methods A Gaussian distribution model with different means and variances was used for the inhomogeneity, and a moving window was used to map the original image into another domain in which the intensity distributions of inhomogeneous objects were still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field by the original signal within the window. A statistical energy function is then defined for each local region. To evaluate the performance of the method, experiments were conducted on MR images of the brain to segment tumors or normal tissue, assessed through visualization and energy functions. Results In the proposed method, we were able to determine the size and position of the initial contour and to choose the number of iterations needed for a better segmentation. The energy function was calculated for 20 to 430 iterations and was reduced by about 5% and 7% after 70 and 430 iterations, respectively. These results showed that the energy function decreased with increasing iterations, falling quickly during the early iterations and slowly thereafter. The method also allows segmentation to be stopped based on a threshold defined for the energy equation. Conclusion An active contour model based on the energy function is a useful tool for medical image segmentation.
The proposed method combines information about neighboring pixels belonging to the same class, making it robust in separating the desired objects from the background.
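The energy-based stopping rule described above can be sketched generically: iterate a contour update and stop once the drop in the energy function becomes negligible relative to a chosen threshold. Here `energy_fn` and `update` are hypothetical placeholders for the model's actual local statistical energy and contour evolution step.

```python
def evolve_contour(state, energy_fn, update, tol=1e-3, max_iter=430):
    prev = energy_fn(state)
    for it in range(max_iter):
        state = update(state)
        cur = energy_fn(state)
        if prev - cur < tol * max(prev, 1e-12):   # energy has stagnated
            return state, it + 1
        prev = cur
    return state, max_iter

# Toy demonstration: energy falls steadily, then flattens, triggering the stop.
state, iters = evolve_contour(
    10.0,
    energy_fn=lambda x: x,
    update=lambda x: max(0.0, x - 1.0),
)
```

This matches the observation in the abstract that the energy decreases quickly early on and slowly later, so a relative-decrease threshold terminates the evolution well before the iteration cap.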

4 citations


Cites methods from "Image reconstruction of compressed ..."

  • ...Many promising methods have been proposed for image segmentation, such as region merging-based methods (6-7), graph-based methods (8-11), and active contour model (ACM)-based methods (12, 13)....

Journal ArticleDOI
TL;DR: A new parallel CS reconstruction model for DCE-MRI is proposed that enforces a flexible weighted sparse constraint along both spatial and temporal dimensions, together with a fast thresholding algorithm proven to be simple and efficient for solving the proposed reconstruction model.
Abstract: Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a tissue perfusion imaging technique. Versatile free-breathing DCE-MRI techniques combining compressed sensing (CS) and parallel imaging with golden-angle radial sampling have been developed to improve motion robustness with high spatial and temporal resolution. These methods have demonstrated good diagnostic performance in clinical settings, but reconstruction quality degrades at high acceleration rates and the overall reconstruction time remains long. In this paper, we propose a new parallel CS reconstruction model for DCE-MRI that enforces a flexible weighted sparse constraint along both the spatial and temporal dimensions. Weights are introduced to flexibly adjust the relative importance of temporal and spatial sparsity, and we derive a fast thresholding algorithm that is proven to be simple and efficient for solving the proposed reconstruction model. Results on both brain tumor DCE and liver DCE data show that, at relatively high acceleration factors, the proposed method obtains the lowest reconstruction error and the highest image structural similarity. In addition, the proposed method achieves faster reconstruction for the liver datasets, and better physiological measures are obtained on the tumor images.
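A weighted shrinkage step of the kind the fast thresholding algorithm above would apply can be sketched as follows: each sparse coefficient is shrunk by a threshold tau scaled by a per-coefficient weight, so temporal and spatial coefficients can be penalized differently. The names and weight values are illustrative, not the paper's.

```python
import numpy as np

def weighted_soft_threshold(coeffs, tau, w):
    """sign(c) * max(|c| - tau*w, 0), elementwise."""
    t = tau * w
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# e.g. a small (time, space) coefficient array with a heavier temporal weight
c = np.array([[3.0, -0.5], [1.2, 0.2]])
w = np.array([[2.0, 1.0], [2.0, 1.0]])       # column 0: temporal, column 1: spatial
shrunk = weighted_soft_threshold(c, tau=0.3, w=w)
```

Raising the weight on one dimension increases its effective threshold and hence its sparsity, which is how the weights trade off time against space sparsity in the model.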

4 citations

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is compared with both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
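The core of the index can be sketched in a global (single-window) form, using the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L. The published index is computed over local windows and averaged; this simplification drops the windowing to show only the formula.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2)
    )

img = np.linspace(0, 255, 64).reshape(8, 8)
print(ssim_global(img, img))                 # identical images score 1.0
```

Distorted images score below 1, with the three factors capturing luminance, contrast, and structure degradation respectively.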

40,609 citations

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data
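The alternation described above can be sketched compactly on toy data: (1) sparse-code each signal, here with single-atom matching pursuit for brevity (the paper allows any pursuit method, e.g. OMP); (2) update each atom and its coefficients via a rank-1 SVD of the residual restricted to the signals that use that atom. Sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, N = 8, 4, 50                           # signal dim, atoms, signals
Y = rng.standard_normal((n, N))              # toy training set
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

errs = []
for _ in range(10):
    # sparse-coding stage: pick the single best atom per signal (T0 = 1)
    X = np.zeros((K, N))
    corr = D.T @ Y
    best = np.abs(corr).argmax(axis=0)
    X[best, np.arange(N)] = corr[best, np.arange(N)]
    # dictionary-update stage, atom by atom
    for k in range(K):
        users = np.flatnonzero(X[k] != 0)
        if users.size == 0:
            continue
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                    # new atom: leading left vector
        X[k, users] = s[0] * Vt[0]           # matching coefficient update
    errs.append(np.linalg.norm(Y - D @ X))
```

Updating the coefficients together with the atom is what distinguishes K-SVD from plain alternating minimization and accelerates its convergence; the representation error is non-increasing across iterations.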

8,905 citations


"Image reconstruction of compressed ..." refers methods in this paper

  • ...Assuming that image patches are linear combinations of element patches, Aharon et al. have used K-SVD to train a patch-based dictionary (Aharon et al., 2006; Ravishankar and Bresler, 2011)....

Journal ArticleDOI
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.
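The Landweber-iteration-with-thresholding scheme proved convergent above can be sketched for the p = 1 case, where the shrinkage is plain soft-thresholding: x_{k+1} = S_tau(x_k + A^T (y - A x_k)). Here A is a toy random matrix scaled so its operator norm is below 1, matching the paper's setting; the data and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 40
A = rng.standard_normal((m, n))
A /= 1.1 * np.linalg.norm(A, 2)              # ensure ||A|| < 1
x_true = np.zeros(n)
x_true[[3, 17]] = [2.0, -1.5]                # sparse ground truth
y = A @ x_true

def soft(v, tau):
    """Soft-thresholding (nonlinear shrinkage) for the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
tau = 0.01                                   # penalty weight
for _ in range(1000):
    x = soft(x + A.T @ (y - A @ x), tau)     # Landweber step + shrinkage
```

The iterates converge in norm to the minimizer of the ℓ1-penalized least-squares functional, which on this sparse toy problem recovers the support of x_true.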

4,339 citations


Additional excerpts

  • ...When β → +∞, expression (6) approaches (5) (Daubechies et al., 2004; Junfeng et al., 2010)....
