Journal ArticleDOI

Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

01 Jan 2016-Medical Image Analysis (Elsevier)-Vol. 27, pp 93-104
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction; it outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
About: This article was published in Medical Image Analysis on 2016-01-01 and has received 150 citations to date. The article focuses on the topics: Iterative reconstruction & Wavelet transform.
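
The paper's graph-based redundant wavelet transform (GBRWT) is not reproduced here, but the overall scheme it plugs into, alternating a sparsifying-transform shrinkage step with a k-space data-consistency step, can be sketched as follows. This is a minimal illustration, assuming PyWavelets with an ordinary decimated db4 wavelet in place of GBRWT; the function name, threshold, and iteration count are illustrative choices, and the sketch works on magnitude images and ignores phase.

    import numpy as np
    import pywt

    def cs_mri_recon(y, mask, wavelet="db4", level=3, lam=0.02, n_iter=50):
        # y: zero-filled k-space measurements; mask: boolean sampling pattern.
        # Generic sketch only -- a standard wavelet stands in for the paper's
        # graph-based redundant wavelet transform (GBRWT).
        x = np.abs(np.fft.ifft2(y))                      # zero-filled start
        for _ in range(n_iter):
            # sparsity step: soft-threshold the detail coefficients
            coeffs = pywt.wavedec2(x, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, lam, mode="soft") for c in band)
                for band in coeffs[1:]
            ]
            x = pywt.waverec2(coeffs, wavelet)[: y.shape[0], : y.shape[1]]
            # data-consistency step: re-impose the measured k-space samples
            k = np.fft.fft2(x)
            k[mask] = y[mask]
            x = np.abs(np.fft.ifft2(k))
        return x

The two alternating steps mirror the generic sparsity-regularized CS-MRI recipe discussed in the paper; the choice of sparsifying transform is exactly what GBRWT replaces.
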
Citations
Journal ArticleDOI
TL;DR: Wang et al. proposed a dual-domain cascade network that utilizes partial convolutional layers to inpaint features in k-space, restoring the MR image without a mask prior.

3 citations

Journal ArticleDOI
TL;DR: In this paper, the multivariate Gaussian scale mixture (GSM) model is developed to precisely characterize the statistical properties of the sparse coefficients of groups formed by similar patches, and a Bayesian group sparse representation (BGSR) is derived from maximum a posteriori (MAP) estimation.

3 citations
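
For context, the MAP estimation mentioned in the TL;DR above generically amounts to minimizing a negative log-likelihood plus a negative log-prior; the specific GSM prior and group construction are defined in that paper and are not reproduced here. A schematic form, assuming Gaussian measurement noise:

    \hat{\alpha} = \arg\max_{\alpha} \, p(\alpha \mid y)
                 = \arg\min_{\alpha} \, \frac{1}{2\sigma_n^{2}} \| y - D\alpha \|_2^{2} \; - \; \log p(\alpha)

where y collects the patches of a group, D is the dictionary, and p(α) is the (GSM-based) prior on the group's sparse coefficients.
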

25 Apr 2016
TL;DR: This work discusses several approaches to establish thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in the case of ischaemic stroke, and finds that thermal imaging yields results comparable to optical imaging yet does not share its limitations.
Abstract: Neurosurgery is a demanding medical discipline that requires a complex interplay of several neuroimaging techniques. This allows structural as well as functional information to be recovered and then visualized to the surgeon. In the case of tumor resections, this approach allows more fine-grained differentiation of healthy and pathological tissue, which positively influences the postoperative outcome as well as the patient's quality of life. In this work, we will discuss several approaches to establish thermal imaging as a novel neuroimaging technique, primarily to visualize neural activity and the perfusion state in the case of ischaemic stroke. Both applications require novel methods for data preprocessing, visualization, and pattern recognition, as well as regression analysis of intraoperative thermal imaging. Online multimodal integration of preoperative and intraoperative data is accomplished by a 2D-3D image registration and image fusion framework with an average accuracy of 2.46 mm. In navigated surgeries, the proposed framework generally provides all necessary tools to project intraoperative 2D imaging data onto preoperative 3D volumetric datasets like 3D MR or CT imaging. Additionally, a fast machine learning framework for the recognition of cortical NaCl rinsings will be discussed throughout this thesis. Hereby, the standardized quantification of tissue perfusion by means of an approximated heating model can be achieved. Classifying the parameters of these models yields a map of connected areas, which we have shown to correlate with the demarcation caused by an ischaemic stroke segmented in postoperative CT datasets. Finally, a semiparametric regression model has been developed for intraoperative neural activity monitoring of the somatosensory cortex by somatosensory evoked potentials. These results were correlated with neural activity measured by optical imaging. We found that thermal imaging yields comparable results, yet does not share the limitations of optical imaging. In this thesis we would like to emphasize that thermal imaging constitutes a novel and valid tool for both intraoperative functional and structural neuroimaging.

2 citations


Cites methods from "Image reconstruction of compressed ..."

  • ...Several wavelet-based methods with sparsity-enforcing constraints were discussed in [51] given 20% subsampled raw MRI recordings....


Journal ArticleDOI
TL;DR: This review shows first how popular signal processing methods, such as basis pursuit and sparse coding, are related to analysis and synthesis and then explains how dictionary learning and deep learning using neural networks can also be interpreted as generalized analysis and synthesis methods.
Abstract: Signal decomposition (analysis) and reconstruction (synthesis) are cornerstones in signal processing and feature recognition tasks. Signal decomposition is traditionally achieved by projecting data onto predefined basis functions, often known as atoms. Coefficient manipulation (e.g., thresholding) combined with signal reconstruction then either provides signals with enhanced quality or permits extraction of desired features only. More recently dictionary learning and deep learning have also been actively used for similar tasks. The purpose of dictionary learning is to derive the most appropriate basis functions directly from the observed data. In deep learning, neural networks or other transfer functions are taught to perform either feature classification or data enhancement directly, provided solely some training data. This review shows first how popular signal processing methods, such as basis pursuit and sparse coding, are related to analysis and synthesis. We then explain how dictionary learning and deep learning using neural networks can also be interpreted as generalized analysis and synthesis methods. We introduce the underlying principles of all techniques and then show their inherent strengths and weaknesses using various examples, including two toy examples, a moonscape image, a magnetic resonance image, and geophysical data.

2 citations
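
The analysis/synthesis distinction around which the review above is organized can be summarized by the two standard formulations below (textbook forms, not quoted from the review), where D is a synthesis dictionary and Ω an analysis operator:

    Synthesis:  \hat{x} = D\hat{\alpha}, \quad \hat{\alpha} = \arg\min_{\alpha} \tfrac{1}{2}\|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1
    Analysis:   \hat{x} = \arg\min_{x} \tfrac{1}{2}\|y - x\|_2^2 + \lambda \|\Omega x\|_1

In the synthesis view the signal is built from a few atoms; in the analysis view the signal is penalized for having many nonzero responses to the operator Ω.
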

Journal ArticleDOI
TL;DR: In this paper, a similarity regularization of the co-supports across multiple contrasts is proposed, together with an effective model that reconstructs images from under-sampled k-space data of one contrast by utilizing another fully sampled contrast of the same anatomy, in order to obtain MR images of higher quality within a limited acquisition time.
Abstract: Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical diagnosis. However, it is time-consuming to obtain MR data of multi-contrasts and the long scanning time may bring unexpected physiological motion artifacts. To obtain MR images of higher quality within limited acquisition time, we propose an effective model to reconstruct images from under-sampled k-space data of one contrast by utilizing another fully-sampled contrast of the same anatomy. Specifically, multiple contrasts from the same anatomical section exhibit similar structures. Enlightened by the fact that co-support of an image provides an appropriate characterization of morphological structures, we develop a similarity regularization of the co-supports across multi-contrasts. In this case, the guided MRI reconstruction problem is naturally formulated as a mixed integer optimization model consisting of three terms, the data fidelity of k-space, smoothness-enforcing regularization, and co-support regularization. An effective algorithm is developed to solve this minimization model alternatively. In the numerical experiments, T2-weighted images are used as the guidance to reconstruct T1-weighted/T2-weighted-Fluid-Attenuated Inversion Recovery (T2-FLAIR) images and PD-weighted images are used as the guidance to reconstruct PDFS-weighted images, respectively, from their under-sampled k-space data. The experimental results demonstrate that the proposed model outperforms other state-of-the-art multi-contrast MRI reconstruction methods in terms of both quantitative metrics and visual performance at various sampling ratios.

2 citations
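
Schematically, the three-term model described in the abstract above can be written as below; the exact definitions of the smoothness term, the co-support variables, and the similarity measure follow that paper and are not reproduced here, so R_smooth and d(·,·) are placeholders:

    \min_{x,\, s} \;\; \tfrac{1}{2}\,\| F_u x - y \|_2^2 \; + \; \lambda_1\, R_{\mathrm{smooth}}(x) \; + \; \lambda_2\, d\!\left(s,\; s_{\mathrm{guide}}\right)

where F_u is the under-sampled Fourier operator, s is the (binary) co-support of x, and s_guide is the co-support extracted from the fully sampled guidance contrast.
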

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index for image quality assessment is proposed based on the degradation of structural information, and its performance is compared with subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
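
The structural similarity index referenced above is computed locally from patch means, variances, and covariance; its standard form is

    \mathrm{SSIM}(x, y) \;=\; \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

where μ and σ denote local means and (co)variances and C1, C2 are small stabilizing constants; the per-window values are averaged over the image to give a single quality score.
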

Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures; it presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

01 Jan 2005

19,250 citations

Journal ArticleDOI
TL;DR: The K-SVD algorithm, a novel algorithm for adapting dictionaries in order to achieve sparse signal representations, is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data

8,905 citations
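
A compact sketch of the K-SVD alternation described above is given below, assuming OMP from scikit-learn for the sparse-coding step; the function and variable names are illustrative, and this is not the authors' reference implementation.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def ksvd(Y, n_atoms, sparsity, n_iter=20, seed=0):
        # Y: (n, N) matrix of column signals. Alternates OMP sparse coding
        # with per-atom rank-1 (SVD) dictionary updates. Illustrative sketch.
        rng = np.random.default_rng(seed)
        n, N = Y.shape
        D = rng.standard_normal((n, n_atoms))
        D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
        for _ in range(n_iter):
            X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)  # codes (n_atoms, N)
            for k in range(n_atoms):
                users = np.flatnonzero(X[k])                   # signals using atom k
                if users.size == 0:
                    continue
                # residual with atom k's contribution added back in
                E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
                U, s, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, k] = U[:, 0]                              # update the atom
                X[k, users] = s[0] * Vt[0]                     # and its coefficients
        return D, X

The per-atom SVD jointly refreshes the atom and its coefficients, which is the convergence-accelerating step highlighted in the abstract.
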


"Image reconstruction of compressed ..." refers methods in this paper

  • ...Assuming that image patches are linear combinations of element patches, Aharon et al. have used K-SVD to train a patch-based dictionary (Aharon et al., 2006; Ravishankar and Bresler, 2011)....


Journal ArticleDOI
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted ℓp penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓp-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. Use of such ℓp-penalized problems with p < 2 is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.

4,339 citations
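
The thresholded Landweber iteration analyzed in that paper (and used as a building block in the main article) takes the familiar iterative soft-thresholding form; writing K for the linear operator and S_μ for componentwise soft-thresholding of the coefficients in the chosen basis,

    x^{(n+1)} = S_{\mu}\!\left( x^{(n)} + K^{*}\!\left( y - K x^{(n)} \right) \right),
    \qquad S_{\mu}(u) = \operatorname{sgn}(u)\,\max(|u| - \mu,\, 0)

with convergence in norm established under a bound on the operator norm of K (the p = 1 case is shown; the weighted ℓp variants use generalized shrinkage functions).
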


Additional excerpts

  • ...When β → +∞, expression (6) approaches (5) (Daubechies et al., 2004; Junfeng et al., 2010)....

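The excerpt above refers to the standard quadratic-penalty argument: a constrained problem is relaxed by penalizing its constraint with weight β, and the relaxation tightens as β grows. The paper's exact expressions (5) and (6) are not reproduced here; a generic illustration of the relationship, with f the objective being minimized and c the constraint being penalized, is

    \min_{x} \; f(x) \;\; \text{s.t.} \;\; c(x) = 0
    \qquad \text{vs.} \qquad
    \min_{x} \; f(x) + \frac{\beta}{2}\, \| c(x) \|_2^2

so that as β → +∞ the penalized formulation approaches the constrained one. In variable-splitting schemes such as the one cited, the penalized constraint is typically the coupling between the image and its auxiliary transform-domain variable.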