
Showing papers on "Iterative reconstruction published in 2012"


Journal ArticleDOI
TL;DR: This review strives to provide information on IR methods for interested physicists and physicians already active in the field of CT, giving an overview of the terminology used and an introduction to the most important algorithmic concepts.

684 citations


Journal ArticleDOI
TL;DR: Methods for reducing noise and out-of-field artifacts may enable ultra-high resolution limited field of view imaging of tumors and other structures and result in a more accurate diagnosis.
Abstract: Artifacts are commonly encountered in clinical CT and may obscure or simulate pathology. There are many different types of CT artifacts, including noise, beam hardening, scatter, pseudoenhancement, motion, cone-beam, helical, ring and metal artifacts. We review the cause and appearance of each type of artifact, correct some popular misconceptions and describe modern techniques for artifact reduction. Noise can be reduced using iterative reconstruction or by combining data from multiple scans. This enables lower radiation dose and higher resolution scans. Metal artifacts can also be reduced using iterative reconstruction, resulting in a more accurate diagnosis. Dual- and multi-energy (photon counting) CT can reduce beam hardening and provide better tissue contrast. Methods for reducing noise and out-of-field artifacts may enable ultra-high resolution limited field of view imaging of tumors and other structures.

658 citations


Journal ArticleDOI
TL;DR: The results show that the proposed approach might produce better images with lower noise and more detailed structural features in the authors' selected cases; however, there is no proof that this is true for all kinds of structures.
Abstract: Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce radiation dose while maintaining the diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to achieve sparse representation. On the other hand, it is important to consider the statistical property of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures.
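
To make the alternating minimization described above concrete, here is a minimal, hypothetical sketch (not the authors' code): a weighted least-squares gradient step on the projection data alternated with patch-wise sparse coding over a fixed redundant dictionary. The toy system matrix A, the statistical weights w, the patch size, the OMP sparsity level and the step sizes are all illustrative assumptions.

```python
# Illustrative sketch of a dictionary-based statistical iterative reconstruction:
# alternate a weighted least-squares data step with sparse coding of image patches.
import numpy as np

def omp(D, y, n_nonzero):
    """Plain orthogonal matching pursuit: sparse-code the vector y over dictionary D."""
    residual, idx = y.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def extract_patches(img, p):
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def put_patches(patches, shape, p):
    H, W = shape
    out, cnt = np.zeros(shape), np.zeros(shape)
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i:i + p, j:j + p] += patches[k].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return out / cnt

def dl_recon(A, y, w, D, shape, p=8, n_iter=10, n_nonzero=4, beta=0.5, step=1e-3):
    """Alternate a weighted least-squares gradient step with patch-wise sparse coding."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        # data-fidelity step on 0.5 * || A x - y ||_W^2 (W = diag(w), toy dense A)
        grad = A.T @ (w * (A @ x.ravel() - y))
        x = x - step * grad.reshape(shape)
        # prior step: sparse-code every patch over D and average the approximations
        P = extract_patches(x, p)
        P_sparse = np.array([D @ omp(D, pat, n_nonzero) for pat in P])
        x = (1 - beta) * x + beta * put_patches(P_sparse, shape, p)
    return x
```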

603 citations


Journal ArticleDOI
TL;DR: Simulation experiments show that the decoupled algorithm derived from the GNE formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming a valuable potential of BM3D-frames as an advanced image modeling tool.
Abstract: A family of block matching 3-D (BM3D) algorithms for various imaging problems has been recently proposed within the framework of nonlocal patchwise image modeling. In this paper, we construct analysis and synthesis frames, formalizing BM3D image modeling, and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem, i.e., one given by the minimization of a single objective function and another based on the generalized Nash equilibrium (GNE) balance of two objective functions. The latter results in an algorithm where deblurring and denoising operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the GNE formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming the valuable potential of BM3D-frames as an advanced image modeling tool.

550 citations


Journal ArticleDOI
TL;DR: This work shows that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing, and incorporates Lambertian and texture-preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework.
Abstract: Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.

434 citations


Journal ArticleDOI
TL;DR: A new release of STIR, an open-source object-oriented C++ library for 3D PET reconstruction, is presented; it enhances the library's flexibility and modular design and also adds extra capabilities such as list-mode reconstruction and support for more data formats.
Abstract: We present a new version of STIR (Software for Tomographic Image Reconstruction), an open source object-oriented library implemented in C++ for 3D positron emission tomography reconstruction. This library has been designed such that it can be used for many algorithms and scanner geometries, while being portable to various computing platforms. This second release enhances its flexibility and modular design and includes additional features such as Compton scatter simulation, an additional iterative reconstruction algorithm and parametric image reconstruction (both indirect and direct). We discuss the new features in this release and present example results. STIR can be downloaded from http://stir.sourceforge.net.

399 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: A novel paradigm for depth reconstruction from 4D light fields in a variational framework is presented that takes into account the special structure of light field data and reformulates the problem of stereo matching as a constrained labeling problem on epipolar plane images.
Abstract: We present a novel paradigm to deal with depth reconstruction from 4D light fields in a variational framework. Taking into account the special structure of light field data, we reformulate the problem of stereo matching as a constrained labeling problem on epipolar plane images, which can be thought of as vertical and horizontal 2D cuts through the field. This alternative formulation allows us to estimate accurate depth values even for specular surfaces, while simultaneously taking into account global visibility constraints in order to obtain consistent depth maps for all views. The resulting optimization problems are solved with state-of-the-art convex relaxation techniques. We test our algorithm on a number of synthetic and real-world examples captured with a light field gantry and a plenoptic camera, and compare to ground truth where available. All data sets as well as source code are provided online for additional evaluation.

385 citations


Journal ArticleDOI
TL;DR: It is shown that data acquisition times for Q-ball and diffusion spectrum imaging (DSI) can be reduced 3-fold with a minor loss in SNR and with similar diffusion results compared to conventional acquisitions.

362 citations


Journal ArticleDOI
TL;DR: The primal-dual optimization algorithm developed in Chambolle and Pock (CP) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT.
Abstract: The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity x-ray illumination is presented.
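
As a rough illustration of the algorithm summarized above, the sketch below implements the generic Chambolle-Pock primal-dual iteration for problems of the form min_x F(Kx) + G(x). The toy instance at the end (least-squares data fidelity with a nonnegativity constraint) is my own assumption for demonstration, not the breast CT application from the paper.

```python
# Generic Chambolle-Pock primal-dual iteration (illustrative sketch, not the
# paper's code).  K and Kt are the forward operator and its adjoint; the user
# supplies prox operators for F* (convex conjugate) and G.
import numpy as np

def chambolle_pock(K, Kt, prox_Fstar, prox_G, x0, y0, L, n_iter=200):
    tau = sigma = 1.0 / L            # step sizes satisfying sigma * tau * L^2 <= 1
    theta = 1.0
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_Fstar(y + sigma * K(x_bar), sigma)     # dual ascent step
        x_new = prox_G(x - tau * Kt(y), tau)            # primal descent step
        x_bar = x_new + theta * (x_new - x)             # over-relaxation
        x = x_new
    return x

# Toy instance (assumption): F(z) = 0.5*||z - b||^2, G(x) = indicator(x >= 0).
A = np.random.rand(30, 20)                              # toy "system matrix"
b = A @ np.abs(np.random.rand(20))
prox_Fstar = lambda y, s: (y - s * b) / (1.0 + s)       # prox of F*(y) = 0.5||y||^2 + <y, b>
prox_G = lambda x, t: np.maximum(x, 0.0)                # projection onto x >= 0
L = np.linalg.norm(A, 2)                                # operator norm bound
x_rec = chambolle_pock(lambda v: A @ v, lambda v: A.T @ v,
                       prox_Fstar, prox_G, np.zeros(20), np.zeros(30), L)
```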

334 citations


Journal ArticleDOI
Samuel Richard, Daniela B. Husarik, G. Yadava, S. Murphy, Ehsan Samei
TL;DR: This approach provides a method for assessing the task-based MTF of a CT system using conventional and iterative reconstructions and demonstrated that the object-specific MTF can vary as a function of dose and contrast.
Abstract: Purpose: To investigate a measurement method for evaluating the resolution properties of CT imaging systems across reconstruction algorithms, dose, and contrast. Methods: An algorithm was developed to extract the task-based modulation transfer function (MTF) from disk images generated from the rod inserts in the ACR phantom (model 464 Gammex, WI). These inserts are conventionally employed for HU accuracy assessment. The edge of the disk objects was analyzed to determine the edge-spread function, which was differentiated to yield the line-spread function and Fourier-transformed to generate the object-specific MTF for task-based assessment, denoted MTFTask. The proposed MTF measurement method was validated against the conventional wire technique and further applied to measure the MTF of CT images reconstructed with an adaptive statistical iterative algorithm (ASIR) and a model-based iterative (MBIR) algorithm. Results were further compared to the standard filtered back projection (FBP) algorithm. Measurements were performed and compared across different doses and contrast levels to ascertain the MTFTask dependencies on those factors. Results: For the FBP reconstructed images, the MTFTask measured with the inserts was the same as the MTF measured with the wire-based method. For the ASIR and MBIR data, the MTFTask using the high contrast insert was similar to the wire-based MTF and equal or superior to that of FBP. However, for the MTFTask measured using the low-contrast inserts, the MTFTask for ASIR and MBIR data was lower than for FBP, which was constant throughout all measurements. Similarly, as a function of mA, the MTFTask for ASIR and MBIR varied as a function of noise, with MTFTask being proportional to mA. Overall greater variability of MTFTask across dose and contrast was observed for MBIR than for ASIR. Conclusions: This approach provides a method for assessing the task-based MTF of a CT system using conventional and iterative reconstructions. Results demonstrated that the object-specific MTF can vary as a function of dose and contrast. The analysis highlighted the paradigm shift for iterative reconstructions when compared to FBP, where iterative reconstructions generally offer superior noise performance but with varying resolution as a function of dose and contrast. The MTFTask generated by this method is expected to provide a more comprehensive assessment of image resolution across different reconstruction algorithms and imaging tasks.
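
The edge-based pipeline described above (edge-spread function from a disk insert, differentiated to a line-spread function, Fourier-transformed to an MTF) can be sketched as follows. The radial-bin width, band width, windowing and centering details are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a task-based MTF measurement from a circular insert.
import numpy as np

def task_mtf(img, center_xy, radius_mm, pixel_mm, bin_mm=0.02, band_mm=5.0):
    yy, xx = np.indices(img.shape)                                   # row/col index grids
    r = np.hypot(xx - center_xy[0], yy - center_xy[1]) * pixel_mm    # radius of each pixel [mm]
    keep = np.abs(r - radius_mm) < band_mm                           # narrow band around the edge
    r, v = r[keep], img[keep].astype(float)

    # oversampled edge-spread function: mean intensity in fine radial bins
    edges = np.arange(r.min(), r.max() + bin_mm, bin_mm)
    which = np.digitize(r, edges)
    esf = np.array([v[which == i].mean() for i in np.unique(which)])

    lsf = np.gradient(esf, bin_mm)            # differentiate ESF -> line-spread function
    lsf *= np.hanning(lsf.size)               # taper to limit noise leakage in the FFT
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=bin_mm)   # spatial frequencies [cycles/mm]
    return freqs, mtf
```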

329 citations


Journal ArticleDOI
TL;DR: A MATLAB package is presented with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems, together with stopping rules based on the discrepancy principle, the monotone error rule, and the NCP criterion.
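
The package itself is MATLAB code; purely as an illustration of the kind of method and stopping rule it bundles, here is a small Python sketch of Kaczmarz (ART) sweeps stopped by the discrepancy principle. The relaxation parameter and the safety factor tau are assumptions.

```python
# Illustrative Kaczmarz / ART iteration with discrepancy-principle stopping.
import numpy as np

def kaczmarz_dp(A, b, noise_norm, relax=1.0, max_sweeps=100, tau=1.02):
    """Sweep over the rows of A; stop once ||A x - b|| <= tau * noise_norm."""
    m, n = A.shape
    x = np.zeros(n)
    row_sq = np.einsum('ij,ij->i', A, A)          # squared row norms
    for _ in range(max_sweeps):
        for i in range(m):
            if row_sq[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_sq[i] * A[i]
        if np.linalg.norm(A @ x - b) <= tau * noise_norm:   # discrepancy principle
            break
    return x
```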

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the application of maximum-likelihood (ML) principles to the image reconstruction problem in coherent diffractive imaging, and describe an implementation of the optimization procedure for ptychography, using conjugate gradients.
Abstract: We introduce the application of maximum-likelihood (ML) principles to the image reconstruction problem in coherent diffractive imaging. We describe an implementation of the optimization procedure for ptychography, using conjugate gradients and including preconditioning strategies, regularization and typical modifications of the statistical noise model. The optimization principle is compared to a difference map reconstruction algorithm. With simulated data important improvements are observed, as measured by a strong increase in the signal-to-noise ratio. Significant gains in resolution and sensitivity are also demonstrated in the ML refinement of a reconstruction from experimental x-ray data. The immediate consequence of our results is the possible reduction of exposure, or dose, by up to an order of magnitude for a reconstruction quality similar to iterative algorithms currently in use.

Journal ArticleDOI
TL;DR: A sparse neighbor selection scheme for SR reconstruction is proposed that can achieve competitive SR quality compared with other state-of-the-art baselines, together with an extended Robust-SL0 algorithm to simultaneously find the neighbors and solve for the reconstruction weights.
Abstract: Until now, neighbor-embedding-based (NE) algorithms for super-resolution (SR) have carried out two independent processes to synthesize high-resolution (HR) image patches. In the first process, neighbor search is performed using the Euclidean distance metric, and in the second process, the optimal weights are determined by solving a constrained least squares problem. However, the separate processes are not optimal. In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. Recognizing that the k-nearest neighbor (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering. By conveying local structural information of HoG in the synthesis stage, the k-NN of each LR input patch is adaptively chosen from their associated subset, which significantly improves the speed of synthesizing the HR image while preserving the quality of reconstruction. Experimental results suggest that the proposed method can achieve competitive SR quality compared with other state-of-the-art baselines.

Proceedings ArticleDOI
13 Oct 2012
TL;DR: In this article, an empirically derived noise model for the Kinect sensor is presented, where both lateral and axial noise distributions are measured as a function of both distance and angle of the Kinect to an observed surface.
Abstract: We contribute an empirically derived noise model for the Kinect sensor. We systematically measure both lateral and axial noise distributions, as a function of both distance and angle of the Kinect to an observed surface. The derived noise model can be used to filter Kinect depth maps for a variety of applications. Our second contribution applies our derived noise model to the KinectFusion system to extend filtering, volumetric fusion, and pose estimation within the pipeline. Qualitative results show our method allows reconstruction of finer details and the ability to reconstruct smaller objects and thinner surfaces. Quantitative results also show our method improves pose estimation accuracy.
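
As an illustration of how such an empirically derived noise model might be used, the sketch below weights depth samples by an assumed distance-dependent axial noise term when fusing registered depth frames. The quadratic form and the coefficients are placeholders standing in for a fitted model, not the paper's values, and the lateral and angle-dependent terms are omitted.

```python
# Toy depth-map fusion weighted by an assumed axial noise model (placeholder fit).
import numpy as np

def axial_sigma(depth_m, a=0.0012, b=0.0019, z0=0.4):
    """Assumed model: axial noise grows roughly quadratically with distance [m]."""
    return a + b * (depth_m - z0) ** 2

def fuse_depth(frames):
    """Inverse-variance weighted average of registered depth frames (toy fusion)."""
    frames = np.asarray(frames, dtype=float)      # shape (n_frames, H, W), metres
    w = 1.0 / axial_sigma(frames) ** 2
    w[frames <= 0] = 0.0                          # ignore invalid (zero) depth samples
    total = w.sum(axis=0)
    fused = np.where(total > 0, (w * frames).sum(axis=0) / np.maximum(total, 1e-12), 0.0)
    return fused
```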

Journal ArticleDOI
TL;DR: MBIR shows great potential for substantially reducing radiation doses at routine abdominal CT, whereas both FBP and ASIR are limited in this regard owing to reduced image quality and diagnostic capability.
Abstract: OBJECTIVE. The purpose of this study was to report preliminary results of an ongoing prospective trial of ultralow-dose abdominal MDCT. SUBJECTS AND METHODS. Imaging with standard-dose contrast-enhanced (n = 21) and unenhanced (n = 24) clinical abdominal MDCT protocols was immediately followed by ultralow-dose imaging of a matched series of 45 consecutively registered adults (mean age, 57.9 years; mean body mass index, 28.5). The ultralow-dose images were reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Standard-dose series were reconstructed with FBP (reference standard). Image noise was measured at multiple predefined sites. Two blinded abdominal radiologists interpreted randomly presented ultralow-dose images for multilevel subjective image quality (5-point scale) and depiction of organ-based focal lesions. RESULTS. Mean dose reduction relative to the standard series was 74% (median, 78%; range, 57–...

Journal ArticleDOI
TL;DR: This work presents l1-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically feasible runtimes, and proposes a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain.
Abstract: We present l1-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l1-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l1-SPIRiT and to achieving clinically feasible runtimes. We present parallelizations of l1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
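
The cross-channel joint sparsity penalty mentioned above is typically minimized with a joint soft-thresholding step; a minimal sketch of that operator is shown below. The wavelet transform and the SPIRiT consistency projection are omitted, so this is only the shrinkage building block, not the full l1-SPIRiT reconstruction.

```python
# Joint (cross-channel) soft-thresholding: coefficients from all receive channels
# are shrunk together based on their root-sum-of-squares magnitude, so a
# coefficient is kept or suppressed jointly across channels.
import numpy as np

def joint_soft_threshold(coeffs, lam):
    """coeffs: complex array of shape (n_channels, ...); lam: threshold level."""
    joint_mag = np.sqrt((np.abs(coeffs) ** 2).sum(axis=0, keepdims=True))
    shrink = np.maximum(joint_mag - lam, 0.0) / np.maximum(joint_mag, 1e-12)
    return coeffs * shrink      # same shrinkage factor applied to every channel
```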

Journal ArticleDOI
TL;DR: The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually.
Abstract: Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k,t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method.

Journal ArticleDOI
TL;DR: Diagnostically acceptable chest CT images acquired with nearly 80% less radiation can be obtained using model-based iterative reconstruction (MBIR), which significantly reduces image noise and artefacts compared with adaptive statistical iterative techniques.
Abstract: Objectives: To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR).

Journal ArticleDOI
TL;DR: Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM resulting in fast convergence of ADMM compared to conventional and state-of-the-art algorithms that are applicable for CT.
Abstract: Statistical image reconstruction using penalized weighted least-squares (PWLS) criteria can improve image quality in X-ray computed tomography (CT). However, the huge dynamic range of the statistical weights leads to a highly shift-variant inverse problem, making it difficult to precondition and accelerate existing iterative algorithms that attack the statistical model directly. We propose to alleviate the problem by using a variable-splitting scheme that separates the shift-variant and ("nearly") invariant components of the statistical data model and also decouples the regularization term. This leads to an equivalent constrained problem that we tackle using the classical method-of-multipliers framework with alternating minimization. The specific form of our splitting yields an alternating direction method of multipliers (ADMM) algorithm with an inner step involving a "nearly" shift-invariant linear system that is suitable for FFT-based preconditioning using cone-type filters. The proposed method can efficiently handle a variety of convex regularization criteria including smooth edge-preserving regularizers and non-smooth sparsity-promoting ones based on the l1-norm and total variation. Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM resulting in fast convergence of ADMM compared to conventional (nonlinear conjugate gradient, ordered subsets) and state-of-the-art (MFISTA, split-Bregman) algorithms that are applicable for CT.
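
As a simplified sketch of this kind of variable splitting (not necessarily the exact splitting used in the paper), introducing an auxiliary variable u = Ax isolates the shift-variant statistical weighting W and yields ADMM updates of the following form, where the x-update is the nearly shift-invariant inner problem amenable to FFT-based (cone-filter) preconditioning:

```latex
% Simplified illustration of one possible splitting of this general type.
\begin{aligned}
&\min_{x,\,u}\ \tfrac{1}{2}\,\|y - u\|_{W}^{2} + R(x)
 \quad\text{subject to}\quad u = A x,\\[4pt]
&u^{k+1} = \arg\min_{u}\ \tfrac{1}{2}\|y - u\|_{W}^{2}
           + \tfrac{\mu}{2}\,\|u - A x^{k} - \eta^{k}\|_{2}^{2}
         = (W + \mu I)^{-1}\bigl(W y + \mu\,(A x^{k} + \eta^{k})\bigr),\\
&x^{k+1} = \arg\min_{x}\ \tfrac{\mu}{2}\,\|u^{k+1} - A x - \eta^{k}\|_{2}^{2} + R(x)
 \quad\text{(nearly shift-invariant; suited to FFT/cone-filter preconditioning)},\\
&\eta^{k+1} = \eta^{k} - \bigl(u^{k+1} - A x^{k+1}\bigr).
\end{aligned}
```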

Journal ArticleDOI
TL;DR: An adaptive-weighted TV (AwTV) minimization algorithm is presented that can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
Abstract: Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
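
One commonly used form of such adaptive weights is an exponential of the local intensity difference; the sketch below evaluates an AwTV-style penalty under that assumption. The exact weight expression and the value of delta are assumptions for illustration, not necessarily the paper's.

```python
# Adaptive-weighted TV penalty: neighbor differences are weighted by an exponential
# of the local intensity gradient so that strong, edge-like differences are
# penalized less and edges are preserved.  delta controls edge sensitivity.
import numpy as np

def awtv(img, delta=0.005):
    dx = np.diff(img, axis=1)                 # horizontal neighbor differences
    dy = np.diff(img, axis=0)                 # vertical neighbor differences
    wx = np.exp(-(dx / delta) ** 2)           # weights shrink across strong edges
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(wx * np.abs(dx)) + np.sum(wy * np.abs(dy))
```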

Journal ArticleDOI
TL;DR: This study showed that Adaptive Iterative Dose Reduction (AIDR) reduces image noise in a phantom; in patients, quantitative and subjective evaluation showed that image noise was significantly lower with AIDR than with FBP.
Abstract: To evaluate the impact of Adaptive Iterative Dose Reduction (AIDR) on image quality and radiation dose in phantom and patient studies. A phantom was examined in volumetric mode on a 320-detector CT at different tube currents from 25 to 550 mAs. CT images were reconstructed with AIDR and with Filtered Back Projection (FBP) reconstruction algorithm. Image noise, Contrast-to-Noise Ratio (CNR), Signal-to-Noise Ratio (SNR) and spatial resolution were compared between FBP and AIDR images. AIDR was then tested on 15 CT examinations of the lumbar spine in a prospective study. Again, FBP and AIDR images were compared. Image noise and SNR were analysed using a Wilcoxon signed-rank test. In the phantom, spatial resolution assessment showed no significant difference between FBP and AIDR reconstructions. Image noise was lower with AIDR than with FBP images with a mean reduction of 40%. CNR and SNR were also improved with AIDR. In patients, quantitative and subjective evaluation showed that image noise was significantly lower with AIDR than with FBP. SNR was also greater with AIDR than with FBP. Compared to traditional FBP reconstruction techniques, AIDR significantly improves image quality and has the potential to decrease radiation dose.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This work proposes a patch based approach, where it is shown that the light field patches with the same disparity value lie on a low-dimensional subspace and that the dimensionality of such subspaces varies quadratically with the disparity value.
Abstract: With the recent availability of commercial light field cameras, we can foresee a future in which light field signals will be as commonplace as images. Hence, there is an imminent need to address the problem of light field processing. We provide a common framework for addressing many of the light field processing tasks, such as denoising, angular and spatial superresolution, etc. (in essence, all processing tasks whose observation models are linear). We propose a patch based approach, where we model the light field patches using a Gaussian mixture model (GMM). We use the "disparity pattern" of the light field data to design the patch prior. We show that the light field patches with the same disparity value (i.e., at the same depth from the focal plane) lie on a low-dimensional subspace and that the dimensionality of such subspaces varies quadratically with the disparity value. We then model the patches as Gaussian random variables conditioned on their disparity value, thus, effectively leading to a GMM model. During inference, we first find the disparity value of a patch by a fast subspace projection technique and then reconstruct it using the LMMSE algorithm. With this prior and inference algorithm, we show that we can perform many different processing tasks under a common framework.

Journal ArticleDOI
TL;DR: It is shown that in time-of-flight PET, the attenuation sinogram is determined by the emission data except for a constant and that its gradient can be estimated efficiently using a simple analytic algorithm.
Abstract: In positron emission tomography (PET), a quantitative reconstruction of the tracer distribution requires accurate attenuation correction. We consider situations where a direct measurement of the attenuation coefficient of the tissues is not available or is unreliable, and where one attempts to estimate the attenuation sinogram directly from the emission data by exploiting the consistency conditions that must be satisfied by the non-attenuated data. We show that in time-of-flight PET, the attenuation sinogram is determined by the emission data except for a constant and that its gradient can be estimated efficiently using a simple analytic algorithm. The stability of the method is illustrated numerically by means of a 2D simulation.

Journal ArticleDOI
TL;DR: A 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology that is able to completely compensate the curvature of the wavefront in the near- field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform.
Abstract: This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.

Book
12 Feb 2012
TL;DR: Echo-Planar Magnetic Resonance Imaging of Human Brain Activation and its Applications: Clinical Applications of Neuroimaging Using Echo-Planar Imaging and Research Issues Using Echo-Planar Imaging for Functional Brain Imaging.
Abstract: 1. The Historical Development of Echo-Planar Magnetic Resonance Imaging.- 2. Theory of Echo-Planar Imaging.- 3. Echo-Planar Imaging Hardware.- 4. Echo-Planar Imaging Pulse Sequences.- 5. Echo-Planar Image Reconstruction.- 6. Echo-Planar Imaging Image Artifacts.- 7. Physiological Side Effects of Fast Gradient Switching.- 8. Echo-Planar Imaging Angiography.- 9. Diffusion Imaging with Echo-Planar Imaging.- 10. Echo-Planar Imaging of the Abdomen.- 11. Abdominal Diffusion Imaging Using Echo-Planar Imaging.- 12. Echo-Planar Imaging of the Heart.- 13. Perfusion Imaging with Echo-Planar Imaging.- 14. Clinical Applications of Neuroimaging Using Echo-Planar Imaging.- 15. Echo-Planar Magnetic Resonance Imaging of Human Brain Activation.- 16. Research Issues Using Echo-Planar Imaging for Functional Brain Imaging.- 17. Echo-Planar Imaging on Small-Bore Systems.- 18. Echo-Planar Imaging-Hybrids: Single Shot RARE.- 19. Echo-Planar Imaging-Hybrids: Turbo Spin-Echo Imaging.- 20. Echo-Planar Imaging-Hybrids: Gradient and Spin-Echo (GRASE) Imaging.- 21. Spiral Echo-Planar Imaging.

Journal ArticleDOI
TL;DR: FSMAR ensures sharp edges and a preservation of anatomical details that is in many cases better than with an inpainting-based MAR method alone, and yields images without the usual blurring close to implants.
Abstract: Purpose: The problem of metal artifact reduction (MAR) is almost as old as the clinical use of computed tomography itself. When metal implants are present in the field of measurement, severe artifacts degrade the image quality and the diagnostic value of CT images. Up to now, no generally accepted solution to this issue has been found. In this work, a method based on a new MAR concept is presented: frequency split metal artifact reduction (FSMAR). It ensures efficient reduction of metal artifacts at high image quality with enhanced preservation of details close to metal implants. Methods: FSMAR combines a raw data inpainting-based MAR method with an image-based frequency split approach. Many typical methods for metal artifact reduction are inpainting-based MAR methods and simply replace unreliable parts of the projection data, for example, by linear interpolation. Frequency split approaches were used in CT, for example, by combining two reconstruction methods in order to reduce cone-beam artifacts. FSMAR combines the high frequencies of an uncorrected image, where all available data were used for the reconstruction, with the more reliable low frequencies of an image which was corrected with an inpainting-based MAR method. The algorithm is tested in combination with normalized metal artifact reduction (NMAR) and with a standard inpainting-based MAR approach. NMAR is a more sophisticated inpainting-based MAR method, which introduces fewer new artifacts that may result from interpolation errors. A quantitative evaluation was performed using the examples of a simulation of the XCAT phantom and a scan of a spine phantom. Further evaluation includes patients with different types of metal implants: hip prostheses, dental fillings, neurocoil, and spine fixation, which were scanned with a modern clinical dual source CT scanner. Results: FSMAR ensures sharp edges and a preservation of anatomical details which is in many cases better than after applying an inpainting-based MAR method only. In contrast to other MAR methods, FSMAR yields images without the usual blurring close to implants. Conclusions: FSMAR should be used together with NMAR, a combination which ensures an accurate correction of both high and low frequencies. The algorithm is computationally inexpensive compared to iterative methods and methods with complex inpainting schemes. No parameters were chosen manually; it is ready for an application in clinical routine.
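
The frequency-split combination itself is simple to sketch: take the low frequencies from the MAR-corrected image and the high frequencies from the uncorrected image. The Gaussian split filter, its width, and the omission of any weighting mask near the metal are simplifying assumptions relative to the method described above.

```python
# Minimal frequency-split combination of an uncorrected and a MAR-corrected image.
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split_mar(img_uncorrected, img_mar, sigma_px=3.0):
    low_mar = gaussian_filter(img_mar, sigma_px)                      # reliable low frequencies
    high_uncorr = img_uncorrected - gaussian_filter(img_uncorrected, sigma_px)  # sharp details
    return low_mar + high_uncorr
```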

Journal ArticleDOI
TL;DR: This work proposes a sparsity-driven method for joint SAR imaging and phase error correction that involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction.
Abstract: Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data, which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements that it provides over existing techniques for model error compensation in SAR.
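
A generic illustration of such an alternating scheme (not the paper's exact updates) is sketched below: a sparsity-driven, ISTA-style image update for fixed phase estimates, followed by a closed-form per-aperture phase update that best aligns the observed data with the current image. The forward models in A_list, the threshold lam and the step size are toy assumptions.

```python
# Toy alternating scheme: sparse image formation + per-aperture phase correction.
import numpy as np

def soft(x, t):
    """Complex soft-thresholding for the l1 penalty."""
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - t, 0.0)

def sar_autofocus(A_list, y_list, n_px, n_outer=20, n_inner=30, lam=0.05, step=0.5):
    """A_list[m]: forward model for aperture position m; y_list[m]: measured data."""
    M = len(A_list)
    phi = np.zeros(M)                           # phase-error estimates
    x = np.zeros(n_px, dtype=complex)           # reflectivity image (vectorized)
    for _ in range(n_outer):
        # (1) image formation: ISTA on  sum_m ||exp(-1j*phi_m)*y_m - A_m x||^2 + lam*||x||_1
        for _ in range(n_inner):
            grad = np.zeros(n_px, dtype=complex)
            for m in range(M):
                grad += A_list[m].conj().T @ (A_list[m] @ x - np.exp(-1j * phi[m]) * y_list[m])
            x = soft(x - step * grad, step * lam)
        # (2) model error correction: closed-form phase aligning y_m with A_m x
        for m in range(M):
            phi[m] = -np.angle(np.vdot(y_list[m], A_list[m] @ x))
    return x, phi
```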

Journal ArticleDOI
TL;DR: In a typical parallel magnetic resonance imaging reconstruction experiment, the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation) is quantified.
Abstract: The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bezier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation). We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms.
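
As a small illustration of the kind of closed-form k-space expression such analytical phantoms rely on, the sketch below evaluates the 2D Fourier transform of a single ellipse (semi-axes a, b, centred at (x0, y0)) under the convention F(k) = ∫ f(r) exp(-2πi k·r) dr. Function and argument names are my own, not the package's API.

```python
# Analytic k-space samples of an ellipse indicator function (no rasterization).
import numpy as np
from scipy.special import j1

def ellipse_kspace(kx, ky, a, b, x0=0.0, y0=0.0, amplitude=1.0):
    rho = np.sqrt((a * kx) ** 2 + (b * ky) ** 2)
    # a*b*J1(2*pi*rho)/rho, with the limit pi*a*b (the ellipse area) as rho -> 0
    mag = np.where(rho > 1e-12,
                   a * b * j1(2 * np.pi * rho) / np.maximum(rho, 1e-12),
                   np.pi * a * b)
    phase = np.exp(-2j * np.pi * (kx * x0 + ky * y0))   # shift theorem for the centre
    return amplitude * mag * phase
```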

Journal ArticleDOI
TL;DR: This paper derives the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, which can accommodate a wide variety of analysis- and synthesis-type regularizers.
Abstract: Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches [based on Stein's unbiased risk estimate (SURE)] need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: predicted-SURE and projected-SURE (which require knowledge of noise variance σ2), and GCV (which does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation and an analysis-type l1-regularization. We demonstrate through simulations and experiments with real data that minimizing predicted-SURE and projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observe that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly suboptimal for MRI. Theoretical derivations in this paper related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms.

Journal ArticleDOI
TL;DR: Experimental and computational evidence obtained in this paper indicates that the proposed scheme for hyperspectral data compression and reconstruction has high potential in real-world applications.
Abstract: Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and input/output throughputs, particularly when real-time processing is desired. In this paper, a proof-of-concept study is conducted on compressive sensing (CS) and unmixing for hyperspectral imaging. Specifically, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera based on the principle of CS. To decode the compressed data, we propose a numerical procedure to compute directly the unmixed abundance fractions of given end members, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model is to minimize the total variation of the abundance fractions subject to a preprocessed fidelity equation with a significantly reduced size and other side constraints. An augmented Lagrangian-type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidence obtained in this paper indicates that the proposed scheme has high potential in real-world applications.