Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.
Citations
TL;DR: This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist–Shannon sampling barrier to reconstruct MR images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
537 citations
Cites background from "Image reconstruction of compressed ..."
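The DAGAN entry above trains a U-Net generator with an image-domain content loss, a frequency-domain loss, and an adversarial loss. The sketch below is a minimal, illustrative composition of such a loss (not the authors' code); the weighting factors, the use of FFT magnitudes, and the binary cross-entropy adversarial term are assumptions made here for concreteness.

```python
# Minimal sketch (not the authors' code) of a DAGAN-style generator loss:
# pixel-wise loss in the image domain, a frequency-domain loss on the FFT of
# the reconstruction, and an adversarial term from a discriminator score.
# alpha/beta/gamma are illustrative placeholder weights, not values from the paper.
import torch
import torch.nn.functional as F

def generator_loss(recon, target, disc_score_fake,
                   alpha=1.0, beta=1.0, gamma=0.01):
    # Image-domain content loss: pixel-wise MSE against the ground-truth image.
    loss_img = F.mse_loss(recon, target)

    # Frequency-domain loss: compare 2-D FFT magnitudes to encourage k-space consistency.
    recon_k = torch.fft.fft2(recon)
    target_k = torch.fft.fft2(target)
    loss_freq = F.mse_loss(torch.abs(recon_k), torch.abs(target_k))

    # Adversarial term: push the discriminator's logit for the reconstruction towards "real".
    loss_adv = F.binary_cross_entropy_with_logits(
        disc_score_fake, torch.ones_like(disc_score_fake))

    return alpha * loss_img + beta * loss_freq + gamma * loss_adv
```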
TL;DR: Two versions of a novel deep learning architecture, dubbed ADMM-CSNet, are proposed by combining the traditional model-based CS method and the data-driven deep learning method for image reconstruction from sparsely sampled measurements; the proposed networks achieve favorable reconstruction accuracy at fast computational speed compared with traditional and other deep learning methods.
Abstract: Compressive sensing (CS) is an effective technique for reconstructing an image from a small amount of sampled data. It has been widely applied in medical imaging, remote sensing, image compression, etc. In this paper, we propose two versions of a novel deep learning architecture, dubbed ADMM-CSNet, by combining the traditional model-based CS method and the data-driven deep learning method for image reconstruction from sparsely sampled measurements. We first consider a generalized CS model for image reconstruction with undetermined regularizations in undetermined transform domains, and then propose two efficient solvers based on the Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing the model. We further unroll and generalize the ADMM algorithm into two deep architectures, in which all parameters of the CS model and of the ADMM algorithm are discriminatively learned by end-to-end training. For both fast CS complex-valued MR imaging and CS imaging of real-valued natural images, the proposed ADMM-CSNet achieves favorable reconstruction accuracy at fast computational speed compared with traditional and other deep learning methods.
173 citations
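ADMM-CSNet unrolls the iterations of an ADMM solver into network stages whose parameters are learned end-to-end. The numpy sketch below shows only the unrolling idea, under simplifying assumptions: the measurement operator A and sparsifying transform D are explicit matrices, soft-thresholding stands in for the learned nonlinearity, and rho/theta play the role of the per-stage parameters a network would train.

```python
# Minimal numpy sketch of the ADMM-unrolling idea behind ADMM-CSNet
# (not the authors' implementation).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_admm(y, A, D, rho, theta, num_stages):
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    for k in range(num_stages):
        # x-update: quadratic subproblem (data fidelity plus coupling to z).
        lhs = A.T @ A + rho[k] * D.T @ D
        rhs = A.T @ y + rho[k] * D.T @ (z - u)
        x = np.linalg.solve(lhs, rhs)
        # z-update: proximal step; a learned nonlinearity in the real network,
        # plain soft-thresholding here as a stand-in.
        z = soft_threshold(D @ x + u, theta[k])
        # u-update: dual ascent on the constraint D x = z.
        u = u + D @ x - z
    return x
```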
TL;DR: A projected iterative soft-thresholding algorithm (pISTA) and its acceleration pFISTA are proposed for CS-MRI image reconstruction; both exploit the sparsity of magnetic resonance (MR) images under the redundant representation of tight frames.
Abstract: Compressed sensing (CS) has exhibited great potential for accelerating magnetic resonance imaging (MRI). In CS-MRI, we want to reconstruct a high-quality image from very few samples in a short time. In this paper, we propose a fast algorithm, called the projected iterative soft-thresholding algorithm (pISTA), and its acceleration pFISTA for CS-MRI image reconstruction. The proposed algorithms exploit the sparsity of magnetic resonance (MR) images under the redundant representation of tight frames. We prove that pISTA and pFISTA converge to a minimizer of a convex function with a balanced tight frame sparsity formulation. pFISTA introduces only one adjustable parameter, the step size, and we provide an explicit rule to set this parameter. Numerical experiment results demonstrate that pFISTA converges faster than the state-of-the-art counterpart while achieving comparable reconstruction errors. Moreover, the reconstruction errors incurred by pFISTA appear insensitive to the step size.
93 citations
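The pFISTA entry above describes a FISTA-type iteration that thresholds tight-frame coefficients and leaves the step size as the only tunable parameter. The sketch below is a generic FISTA iteration with a tight frame written in that spirit, not the authors' exact update rule; the operators A/At (undersampled measurement and its adjoint) and Psi/PsiT (frame analysis/synthesis) are assumed callables supplied by the caller, and real-valued data is assumed so that plain soft-thresholding applies.

```python
# Illustrative numpy sketch of a FISTA-style iteration with a tight frame,
# in the spirit of pFISTA. PsiT(Psi(x)) == x is assumed (tight frame),
# and gamma is the single step-size parameter.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_tight_frame(y, A, At, Psi, PsiT, lam, gamma, iters=100):
    x = At(y)          # zero-filled starting point
    z = x.copy()       # extrapolation point
    t = 1.0
    for _ in range(iters):
        # Gradient step on the data-fidelity term 0.5*||A z - y||^2 at the extrapolated point.
        v = z - gamma * At(A(z) - y)
        # Soft-threshold the tight-frame coefficients, then synthesize back to image space.
        x_new = PsiT(soft_threshold(Psi(v), gamma * lam))
        # Nesterov/FISTA momentum update.
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```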
TL;DR: Experimental results on simulated and real magnetic resonance spectroscopy data show that the proposed approach can successfully recover full signals from very limited samples and is robust to the estimated tensor rank.
Abstract: Signals are generally modeled as a superposition of exponential functions in spectroscopy of chemistry, biology, and medical imaging. For fast data acquisition or other inevitable reasons, however, only a small amount of samples may be acquired, and thus, how to recover the full signal becomes an active research topic, but existing approaches cannot efficiently recover $N$-dimensional exponential signals with $N \geq 3$. In this paper, we study the problem of recovering $N$-dimensional (particularly $N \geq 3$) exponential signals from partial observations, and formulate this problem as a low-rank tensor completion problem with exponential factor vectors. The full signal is reconstructed by simultaneously exploiting the CANDECOMP/PARAFAC tensor structure and the exponential structure of the associated factor vectors. The latter is promoted by minimizing an objective function involving the nuclear norm of Hankel matrices. Experimental results on simulated and real magnetic resonance spectroscopy data show that the proposed approach can successfully recover full signals from very limited samples and is robust to the estimated tensor rank.
66 citations
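The entry above promotes exponential structure by penalizing the nuclear norm of Hankel matrices built from the CP factor vectors. The small numpy example below only illustrates the fact that motivates this penalty: the Hankel matrix of a superposition of R damped exponentials has rank R, so its nuclear norm is a convex surrogate for "being a few exponentials". Signal length, rank, and the random frequencies are arbitrary choices for the demonstration, not values from the paper.

```python
# Why a Hankel nuclear-norm penalty promotes exponential structure
# (motivating fact only; this is not the paper's reconstruction algorithm).
import numpy as np

def hankel(v, rows):
    """Hankel matrix whose constant anti-diagonals read off v."""
    cols = len(v) - rows + 1
    return np.array([v[i:i + cols] for i in range(rows)])

n, R = 64, 3
t = np.arange(n)
rng = np.random.default_rng(0)
freqs = rng.uniform(0, np.pi, R)
damps = rng.uniform(0.01, 0.05, R)
# A superposition of R damped complex exponentials.
signal = sum(np.exp((-d + 1j * f) * t) for d, f in zip(damps, freqs))

H = hankel(signal, rows=n // 2)
sv = np.linalg.svd(H, compute_uv=False)
print("numerical rank:", np.sum(sv > 1e-8 * sv[0]))   # ~ R
print("nuclear norm  :", sv.sum())
```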
Proceedings Article
TL;DR: This work proposes a recursive dilated network (RDN) for CS-MRI that achieves good performance while reducing the number of network parameters, and adopts dilated convolutions in each recursive block to aggregate multi-scale information within the MRI.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) is an active research topic in the field of inverse problems. Conventional CS-MRI algorithms usually exploit the sparse nature of MRI in an iterative manner. These optimization-based CS-MRI methods are often time-consuming at test time, and are based on fixed transform bases or shallow dictionaries, which limits their modeling capacity. Recently, deep models have been introduced to the CS-MRI problem. One main challenge for deep learning-based CS-MRI methods is the trade-off between model performance and network size. We propose a recursive dilated network (RDN) for CS-MRI that achieves good performance while reducing the number of network parameters. We adopt dilated convolutions in each recursive block to aggregate multi-scale information within the MRI. We also adopt a modified shortcut strategy to help features flow into deeper layers. Experimental results show that the proposed RDN model achieves state-of-the-art performance in CS-MRI while using far fewer parameters than previously required.
49 citations
Cites methods from "Image reconstruction of compressed ..."
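The RDN entry above rests on two ingredients: recursive blocks with shared weights to keep the parameter count low, and dilated convolutions plus shortcuts to aggregate multi-scale information. The PyTorch sketch below is a minimal block in that spirit; the channel width, the number of recursions, and the dilation rates (1, 2, 4) are illustrative guesses, not the paper's configuration.

```python
# Minimal PyTorch sketch of a recursive block of dilated convolutions with a
# shortcut (an illustration of the ideas described above, not the RDN itself).
import torch
import torch.nn as nn

class RecursiveDilatedBlock(nn.Module):
    def __init__(self, channels=64, recursions=4):
        super().__init__()
        self.recursions = recursions
        # One set of dilated convolutions, reused at every recursion;
        # this weight sharing is what keeps the parameter count small.
        self.convs = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)          # multi-scale receptive fields
        ])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = x
        for _ in range(self.recursions):
            h = out
            for conv in self.convs:
                h = self.relu(conv(h))
            out = h + x                 # shortcut back to the block input
        return out

# Usage: pass a feature map of the assumed channel width through the block.
feat = torch.randn(1, 64, 128, 128)
print(RecursiveDilatedBlock()(feat).shape)   # torch.Size([1, 64, 128, 128])
```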
References
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and its promise is demonstrated through comparison with subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
30,333 citations
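The SSIM reference above compares two images through statistics of luminance, contrast, and structure. The numpy function below evaluates the published SSIM formula (with the standard constants K1 = 0.01, K2 = 0.03) over a single global window; the actual index computes the same statistics in a sliding Gaussian-weighted window and averages the resulting map, so this is an illustration of the formula, not a drop-in replacement.

```python
# Single-window sketch of the SSIM comparison (global statistics only).
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    # Combined luminance, contrast, and structure comparison.
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Identical images score 1.0; added noise lowers the score.
img = np.random.default_rng(1).uniform(0, 255, (64, 64))
print(ssim_global(img, img))
print(ssim_global(img, img + np.random.default_rng(2).normal(0, 20, img.shape)))
```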
Book
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher:
The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects.
In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage.
As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds.
Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.
21,642 citations
TL;DR: A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.
8,149 citations
"Image reconstruction of compressed ..." refers methods in this paper
TL;DR: It is proved that replacing the usual quadratic regularizing penalties by weighted $\ell^p$-penalties on the coefficients of such expansions, with $1 \le p \le 2$, still regularizes the problem.
Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary preassigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted $\ell^p$-penalties on the coefficients of such expansions, with $1 \le p \le 2$, still regularizes the problem. Use of such $\ell^p$-penalized problems with $p < 2$ is often advocated when one expects the underlying ideal noiseless solution to have a sparse expansion with respect to the basis under consideration. To compute the corresponding regularized solutions, we analyze an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. © 2004 Wiley Periodicals, Inc.
3,809 citations
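The reference above analyzes a Landweber iteration with shrinkage applied at each step. For the $p = 1$ case the shrinkage is plain soft-thresholding, giving the iteration sketched below in numpy. The explicit step size, the matrix measurement operator, and the synthetic usage example are additions for illustration (the paper analyzes the unit-step iteration with the operator norm bounded by one, and allows an arbitrary orthonormal basis; the identity basis is used here).

```python
# Minimal numpy sketch of the thresholded Landweber iteration (ISTA) for p = 1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=500):
    # Standard ISTA step-size choice, below 1 / ||A||^2.
    tau = 0.99 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Landweber (gradient) step followed by soft-thresholding.
        x = soft_threshold(x + tau * A.T @ (y - A @ x), tau * lam)
    return x

# Usage: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.linalg.norm(ista(A, y, lam=0.1) - x_true))
```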