scispace - formally typeset

Showing papers on "Iterative reconstruction published in 2016"


Proceedings ArticleDOI
27 Jun 2016
TL;DR: This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
Abstract: Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
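The sub-pixel convolution layer ends with a periodic-shuffling step that interleaves r² low-resolution feature maps into one high-resolution channel. A minimal numpy sketch of that rearrangement (the channel ordering below matches the common convention, e.g. PyTorch's `pixel_shuffle`; it is not the authors' code):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    This is the periodic-shuffling step of the sub-pixel convolution
    layer: each group of r*r low-resolution feature maps is interleaved
    into one high-resolution channel.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Four 1x1 "feature maps" become one 2x2 high-resolution output.
lr = np.arange(4.0).reshape(4, 1, 1)
hr = pixel_shuffle(lr, 2)
```

Because the shuffle is a pure memory rearrangement, all the learned filtering stays in the cheap low-resolution space, which is where the speed-up comes from.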

4,770 citations


Journal ArticleDOI
TL;DR: Aimed at researchers across multiple tomographic application fields, the ASTRA Toolbox provides a highly efficient and highly flexible open source set of tools for tomographic projection and reconstruction.
Abstract: Object reconstruction from a series of projection images, such as in computed tomography (CT), is a popular tool in many different application fields. Existing commercial software typically provides sufficiently accurate and convenient-to-use reconstruction tools to the end-user. However, in applications where a non-standard acquisition protocol is used, or where advanced reconstruction methods are required, the standard software tools often are incapable of computing accurate reconstruction images. This article introduces the ASTRA Toolbox. Aimed at researchers across multiple tomographic application fields, the ASTRA Toolbox provides a highly efficient and highly flexible open source set of tools for tomographic projection and reconstruction. The main features of the ASTRA Toolbox are discussed and several use cases are presented.
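The projection operators that toolboxes like ASTRA provide are, at heart, linear operators mapping an image to line integrals. A deliberately tiny stand-in (not the ASTRA API) that supports only the 0° and 90° views, where the line integrals reduce to column and row sums:

```python
import numpy as np

def project(img, angle):
    """Toy parallel-beam projector: line integrals at 0 or 90 degrees.

    Real toolboxes such as ASTRA implement this operator (and its
    adjoint, backprojection) for arbitrary geometries on the GPU; this
    sketch only illustrates the linear-operator view of tomography.
    """
    if angle == 0:
        return img.sum(axis=0)   # integrate along columns
    if angle == 90:
        return img.sum(axis=1)   # integrate along rows
    raise ValueError("toy projector supports 0 and 90 degrees only")

phantom = np.zeros((4, 4))
phantom[1:3, 1:3] = 1.0          # a small square object
sino0 = project(phantom, 0)
sino90 = project(phantom, 90)
```

Everything a reconstruction algorithm does, from FBP to iterative methods, is built on repeated application of this operator and its adjoint.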

623 citations


Proceedings ArticleDOI
01 Jun 2016
TL;DR: A novel convolutional neural network architecture, ReconNet, which takes CS measurements of an image as input and outputs an intermediate reconstruction that is fed into an off-the-shelf denoiser to obtain the final reconstructed image.
Abstract: The goal of this paper is to present a non-iterative and more importantly an extremely fast algorithm to reconstruct images from compressively sensed (CS) random measurements. To this end, we propose a novel convolutional neural network (CNN) architecture which takes in CS measurements of an image as input and outputs an intermediate reconstruction. We call this network, ReconNet. The intermediate reconstruction is fed into an off-the-shelf denoiser to obtain the final reconstructed image. On a standard dataset of images we show significant improvements in reconstruction results (both in terms of PSNR and time complexity) over state-of-the-art iterative CS reconstruction algorithms at various measurement rates. Further, through qualitative experiments on real data collected using our block single pixel camera (SPC), we show that our network is highly robust to sensor noise and can recover visually better quality images than competitive algorithms at extremely low sensing rates of 0.1 and 0.04. To demonstrate that our algorithm can recover semantically informative images even at a low measurement rate of 0.01, we present a very robust proof of concept real-time visual tracking application.
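To make the setup concrete: CS acquires m ≪ n random linear measurements y = Φx, and ReconNet learns the mapping from y back to an image. A hedged sketch of the measurement model, with a minimum-norm least-squares estimate standing in for the learned network (all sizes and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 16                                    # signal length, number of CS measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

x = np.zeros(n)                                  # a 3-sparse test signal
x[[5, 20, 41]] = [1.0, -2.0, 1.5]

y = phi @ x                                      # compressive measurements (m << n)

# Stand-in for the learned mapping: the minimum-norm least-squares
# estimate. ReconNet replaces this with a trained CNN, then passes the
# intermediate reconstruction through an off-the-shelf denoiser.
x_interm = np.linalg.pinv(phi) @ y
```

The pseudo-inverse matches the measurements exactly but smears the sparse spikes across the signal; the point of the paper is that a trained feed-forward network produces a far better intermediate estimate at a fraction of the cost of iterative CS solvers.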

598 citations


Journal ArticleDOI
TL;DR: The relationship between classic parallel imaging techniques and SMS reconstruction methods is explored, and the practical implementation of SMS imaging is described, including the acquisition of reference data and slice cross-talk.
Abstract: Simultaneous multislice imaging (SMS) using parallel image reconstruction has rapidly advanced to become a major imaging technique. The primary benefit is an acceleration in data acquisition that is equal to the number of simultaneously excited slices. Unlike in-plane parallel imaging this can have only a marginal intrinsic signal-to-noise ratio penalty, and the full acceleration is attainable at fixed echo time, as is required for many echo planar imaging applications. Furthermore, for some implementations SMS techniques can reduce radiofrequency (RF) power deposition. In this review the current state of the art of SMS imaging is presented. In the Introduction, a historical overview is given of the history of SMS excitation in MRI. The following section on RF pulses gives both the theoretical background and practical application. The section on encoding and reconstruction shows how the collapsed multislice images can be disentangled by means of the transmitter pulse phase, gradient pulses, and most importantly using multichannel receiver coils. The relationship between classic parallel imaging techniques and SMS reconstruction methods is explored. The subsequent section describes the practical implementation, including the acquisition of reference data, and slice cross-talk. Published applications of SMS imaging are then reviewed, and the article concludes with an outlook and perspective of SMS imaging.
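The multichannel-coil unaliasing described above can be reduced, at a single pixel, to a small linear system: each coil measures a sensitivity-weighted sum of the collapsed slices, and inverting that system separates them. A toy two-coil, two-slice sketch (sensitivity values are illustrative; real SMS reconstruction solves this jointly with regularization, as in SENSE/GRAPPA):

```python
import numpy as np

# Two simultaneously excited slices seen by two receiver coils.
# Coil sensitivities at one pixel location (rows: coils, cols: slices).
S = np.array([[1.0, 0.3],
              [0.2, 0.9]])

truth = np.array([4.0, 7.0])    # pixel value in slice 1 and slice 2
collapsed = S @ truth           # what each coil actually measures

# Unaliasing at this pixel: invert the small sensitivity system.
recovered = np.linalg.solve(S, collapsed)
```

The conditioning of S is exactly where the (marginal) SNR penalty mentioned above comes from: nearly identical coil sensitivities make the system ill-conditioned and amplify noise.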

440 citations


Journal ArticleDOI
TL;DR: The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU.
Abstract: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU.
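The key observation, that the normal operator H*H of a convolutional forward model is itself a convolution (with the autocorrelation of the kernel), can be checked numerically in 1D. A small sketch (illustrative sizes and kernel, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
h = np.array([1.0, 2.0, 3.0])
m = len(h)
x = rng.standard_normal(n)

# The forward model H: full 1D convolution with h, written as a matrix.
H = np.zeros((n + m - 1, n))
for j in range(n):
    H[j:j + m, j] = h

normal_matrix = H.T @ (H @ x)        # H*H applied explicitly

# The same normal operator as a single convolution with the
# autocorrelation of h -- the observation the paper builds on.
auto = np.convolve(h, h[::-1])
normal_conv = np.convolve(x, auto)[m - 1:m - 1 + n]
```

Since unrolled iterative methods repeatedly apply H*H plus point-wise steps, they have the structure of a CNN, which motivates training a CNN directly.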

415 citations


Journal ArticleDOI
TL;DR: The combination of tomographic imaging and deep learning, or machine learning in general, promises to empower not only image analysis but also image reconstruction; this perspective article considers the latter aspect, with an emphasis on medical imaging, to develop a new generation of image reconstruction theories and techniques.
Abstract: The combination of tomographic imaging and deep learning, or machine learning in general, promises to empower not only image analysis but also image reconstruction. The latter aspect is considered in this perspective article with an emphasis on medical imaging to develop a new generation of image reconstruction theories and techniques. This direction might lead to intelligent utilization of domain knowledge from big data, innovative approaches for image reconstruction, and superior performance in clinical and preclinical applications. To realize the full impact of machine learning for tomographic imaging, major theoretical, technical and translational efforts are immediately needed.

370 citations


Journal ArticleDOI
TL;DR: Recent developments in system design, image reconstruction, corrections, and potential new applications for TOF-PET are reviewed, introducing the reader in an educational way to the topic of time-of-flight PET.
Abstract: While the first time-of-flight (TOF)-positron emission tomography (PET) systems were already built in the early 1980s, limited clinical studies were acquired on these scanners. PET was still a research tool, and the available TOF-PET systems were experimental. Due to a combination of low stopping power and limited spatial resolution (caused by limited light output of the scintillators), these systems could not compete with bismuth germanate (BGO)-based PET scanners. Developments on TOF system were limited for about a decade but started again around 2000. The combination of fast photomultipliers, scintillators with high density, modern electronics, and faster computing power for image reconstruction have made it possible to introduce this principle in clinical TOF-PET systems. This paper reviews recent developments in system design, image reconstruction, corrections, and the potential in new applications for TOF-PET. After explaining the basic principles of time-of-flight, the difficulties in detector technology and electronics to obtain a good and stable timing resolution are briefly explained. The available clinical systems and prototypes under development are described in detail. The development of this type of PET scanner also requires modified image reconstruction with accurate modeling and correction methods. The additional dimension introduced by the time difference motivates a shift from sinogram- to listmode-based reconstruction. This reconstruction is however rather slow and therefore rebinning techniques specific for TOF data have been proposed. The main motivation for TOF-PET remains the large potential for image quality improvement and more accurate quantification for a given number of counts.
The gain is related to the ratio of object size and spatial extent of the TOF kernel and is therefore particularly relevant for heavy patients, where image quality degrades significantly due to increased attenuation (low counts) and high scatter fractions. The original calculations for the gain were based on analytical methods. Recent publications for iterative reconstruction have shown that it is difficult to quantify TOF gain into one factor. The gain depends on the measured distribution, the location within the object, and the count rate. In a clinical situation, the gain can be used to either increase the standardized uptake value (SUV) or reduce the image acquisition time or administered dose. The localized nature of the TOF kernel makes it possible to utilize local tomography reconstruction or to separate emission from transmission data. The introduction of TOF also improves the joint estimation of transmission and emission images from emission data only. TOF is also interesting for new applications of PET, such as isotopes with low branching ratio for positron fraction. The local nature also reduces the need for fine angular sampling, which makes TOF interesting for limited angle situations like breast PET and online dose imaging in proton or hadron therapy. The aim of this review is to introduce the reader in an educational way into the topic of TOF-PET and to give an overview of the benefits and new opportunities in using this additional information.
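The back-of-the-envelope version of the gain argument above: a timing resolution Δt localizes the annihilation along the line of response to Δx = c·Δt/2, and the classical sensitivity gain is roughly the object diameter divided by Δx. A quick numeric sketch (the review itself stresses that a single gain factor is an oversimplification):

```python
C = 299_792_458.0          # speed of light, m/s

def tof_kernel_fwhm(timing_resolution_s):
    """Spatial localization along the line of response: dx = c * dt / 2."""
    return C * timing_resolution_s / 2.0

def tof_gain(object_diameter_m, timing_resolution_s):
    """Rule-of-thumb gain ~ D / dx; one common approximation only."""
    return object_diameter_m / tof_kernel_fwhm(timing_resolution_s)

dx = tof_kernel_fwhm(500e-12)    # a 500 ps scanner localizes to ~7.5 cm
gain = tof_gain(0.40, 500e-12)   # 40 cm patient -> gain of roughly 5
```

This is why the benefit grows with patient size: the same TOF kernel covers a smaller fraction of a larger object.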

277 citations


Journal ArticleDOI
TL;DR: This paper presents an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the nonlocal redundancy in images, and demonstrates that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
Abstract: Many material and biological samples in scientific imaging are characterized by nonlocal repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a two-dimensional image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the nonlocal redundancy in images. We adapt a framework, termed plug-and-play priors, to solve these imaging problems in a regularized inversion setting. The power of the plug-and-play approach is that it allows a wide array of modern denoising algorithms to be used as a “prior model” for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the plug-and-play approach, and we use these insights to design a new nonlocal means denoising algorithm. Finally, we demonstrate that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.
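The plug-and-play idea, alternating a data-consistency step with an arbitrary plug-in denoiser, can be shown on a toy 1D inpainting problem. A hedged sketch using a crude moving-average "denoiser" and a half-quadratic-splitting-style loop (the paper uses ADMM with a nonlocal means prior; every number and name below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x_true = np.sin(np.linspace(0, 4 * np.pi, n))

mask = rng.random(n) < 0.6          # observe ~60% of the samples
y = np.where(mask, x_true, 0.0)     # zero-filled measurements

def denoise(v):
    """Plug-in prior: a crude moving-average denoiser. The framework
    lets any modern denoiser (e.g. nonlocal means) fill this role."""
    return np.convolve(v, np.ones(5) / 5.0, mode="same")

rho = 0.1
x = y.copy()
for _ in range(50):
    v = denoise(x)                            # prior step
    x = (mask * y + rho * v) / (mask + rho)   # data-consistency step

mse_init = np.mean((y - x_true) ** 2)
mse_pnp = np.mean((x - x_true) ** 2)
```

The data step keeps observed samples close to their measurements while the denoiser diffuses information into the gaps; the paper's contribution includes conditions under which such alternation provably converges.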

267 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: A learning-based approach for reconstructing a three-dimensional face from a single image, based on a convolutional neural network (CNN) that extracts the face geometry directly from the image.
Abstract: Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional-Neural-Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, currently, there are no large volume data sets, while acquiring such big-data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions.

266 citations


Journal ArticleDOI
TL;DR: Experimental results using in vivo data for single/multicoil imaging as well as dynamic imaging confirmed that the proposed method outperforms state-of-the-art pMRI and CS-MRI methods.
Abstract: Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been considered as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, an annihilating filter-based low-rank Hankel matrix approach is proposed as a general framework for sparsity-driven k-space interpolation method which unifies pMRI and CS-MRI. Specifically, our framework is based on a novel observation that the transform domain sparsity in the primary space implies the low-rankness of weighted Hankel matrix in the reciprocal space. This converts pMRI and CS-MRI to a k-space interpolation problem using a structured matrix completion. Experimental results using in vivo data for single/multicoil imaging as well as dynamic imaging confirmed that the proposed method outperforms state-of-the-art pMRI and CS-MRI methods.
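The sparsity-implies-low-rank duality at the core of this framework has a simple 1D illustration: a signal that is a sum of a few exponentials (i.e., sparse in the reciprocal domain) yields a low-rank Hankel matrix. A hedged sketch with illustrative frequencies:

```python
import numpy as np

n = np.arange(64)
# A signal whose spectrum is four spikes: two real sinusoids,
# i.e. four complex exponentials.
x = np.cos(0.3 * n) + 0.5 * np.cos(1.1 * n)

def hankel(sig, d):
    """Build the (len(sig)-d+1) x d Hankel matrix H[i, j] = sig[i + j]."""
    rows = len(sig) - d + 1
    return np.array([sig[i:i + d] for i in range(rows)])

H = hankel(x, 16)
rank = np.linalg.matrix_rank(H, tol=1e-8)   # rank equals the number of exponentials
```

Missing k-space samples then become missing Hankel entries, so interpolation turns into structured low-rank matrix completion, which is what unifies the pMRI and CS-MRI settings.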

252 citations


Journal ArticleDOI
TL;DR: This work presents fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range ofSR-SIM platforms directly within ImageJ, and can easily be adapted, automated and extended as the field of SR- SIM progresses.
Abstract: Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns, and reconstruct them to a super-resolved image. In its most frequent, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data, and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM takes away the hurdle of generating yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated and extended as the field of SR-SIM progresses.

Posted Content
TL;DR: A deep residual learning approach for sparse-view CT reconstruction is developed, based on a persistent homology analysis showing that the manifold of streaking artifacts is topologically simpler than that of the original images.
Abstract: Recently, compressed sensing (CS) computed tomography (CT) using sparse projection views has been extensively investigated to reduce the potential risk of radiation to the patient. However, due to the insufficient number of projection views, an analytic reconstruction approach results in severe streaking artifacts, and the CS-based iterative approach is computationally very expensive. To address this issue, here we propose a novel deep residual learning approach for sparse-view CT reconstruction. Specifically, based on a novel persistent homology analysis showing that the manifold of streaking artifacts is topologically simpler than that of the original images, a deep residual learning architecture that estimates the streaking artifacts is developed. Once a streaking artifact image is estimated, an artifact-free image can be obtained by subtracting the streaking artifacts from the input image. Using extensive experiments with a real patient data set, we confirm that the proposed residual learning provides significantly better image reconstruction performance with several orders of magnitude faster computational speed.

Journal ArticleDOI
TL;DR: The primary goals of this paper are to identify the strengths and limitations of individual MAR methods and overall classes, and establish a relationship between types of metal objects and the classes that most effectively overcome their artifacts.
Abstract: Methods to overcome metal artifacts in computed tomography (CT) images have been researched and developed for nearly 40 years. When X-rays pass through a metal object, depending on its size and density, different physical effects will negatively affect the measurements, most notably beam hardening, scatter, noise, and the non-linear partial volume effect. These phenomena severely degrade image quality and hinder the diagnostic power and treatment outcomes in many clinical applications. In this paper, we first review the fundamental causes of metal artifacts, categorize metal object types, and present recent trends in the CT metal artifact reduction (MAR) literature. To improve image quality and recover information about underlying structures, many methods and correction algorithms have been proposed and tested. We comprehensively review and categorize these methods into six different classes of MAR: metal implant optimization, improvements to the data acquisition process, data correction based on physics models, modifications to the reconstruction algorithm (projection completion and iterative reconstruction), and image-based post-processing. The primary goals of this paper are to identify the strengths and limitations of individual MAR methods and overall classes, and establish a relationship between types of metal objects and the classes that most effectively overcome their artifacts. The main challenges for the field of MAR continue to be cases with large, dense metal implants, as well as cases with multiple metal objects in the field of view. Severe photon starvation is difficult to compensate for with only software corrections. Hence, the future of MAR seems to be headed toward a combined approach of improving the acquisition process with dual-energy CT, higher energy X-rays, or photon-counting detectors, along with advanced reconstruction approaches. 
Additional outlooks are addressed, including the need for a standardized evaluation system to compare MAR methods.
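Beam hardening, the first physical effect listed above, is easy to demonstrate numerically: for a polychromatic beam, the measured attenuation −ln(I/I0) is no longer proportional to the path length through the material. A toy two-energy sketch (spectrum weights and attenuation coefficients are illustrative only):

```python
import numpy as np

# Toy two-energy spectrum: equal flux at a "soft" and a "hard" energy,
# with different attenuation coefficients for the same material.
weights = np.array([0.5, 0.5])
mu = np.array([0.8, 0.3])          # 1/cm, low vs high energy (illustrative)

def measured_attenuation(length_cm):
    """-ln(I/I0) for a polychromatic beam through `length_cm` of material."""
    i_rel = np.sum(weights * np.exp(-mu * length_cm))
    return -np.log(i_rel)

lengths = np.array([1.0, 2.0, 4.0])
p = np.array([measured_attenuation(L) for L in lengths])
# For a monochromatic beam, p would double when L doubles; here it does
# not, because the soft photons are preferentially absorbed first. This
# nonlinearity is what produces cupping and streaks near dense metal.
```

Physics-based MAR corrections in the taxonomy above essentially linearize this curve, while dual-energy and photon-counting acquisition attack the effect at the source.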

Journal ArticleDOI
TL;DR: The proposed method can be exploited in undersampled magnetic resonance imaging to reduce data acquisition time and reconstruct images with better image quality and the computation of the proposed approach is much faster than the typical K-SVD dictionary learning method in magnetic resonance image reconstruction.
Abstract: Objective: To improve the reconstructed image with fast, multiclass dictionary learning when magnetic resonance imaging is accelerated by undersampling the k-space data. Methods: A fast orthogonal dictionary learning method is introduced into magnetic resonance image reconstruction to provide adaptive sparse representation of images. To enhance the sparsity, the image is divided into patches classified according to geometrical direction, and a dictionary is trained within each class. A new sparse reconstruction model with the multiclass dictionaries is proposed and solved using a fast alternating direction method of multipliers. Results: Experiments on phantom and brain imaging data with acceleration factors up to 10 and various undersampling patterns are conducted. The proposed method is compared with state-of-the-art magnetic resonance image reconstruction methods. Conclusion: Artifacts are better suppressed and image edges are better preserved than with the compared methods. Besides, the computation of the proposed approach is much faster than the typical K-SVD dictionary learning method in magnetic resonance image reconstruction. Significance: The proposed method can be exploited in undersampled magnetic resonance imaging to reduce data acquisition time and reconstruct images with better image quality.
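The reason orthogonal dictionaries are fast: sparse coding collapses to a transform followed by a threshold, with no iterative pursuit (OMP) as required by overcomplete K-SVD dictionaries. A hedged sketch with a random orthogonal dictionary (all sizes and values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random orthogonal dictionary (orthonormal columns) via QR.
D, _ = np.linalg.qr(rng.standard_normal((32, 32)))

# A patch that is exactly 3-sparse in this dictionary.
coeffs = np.zeros(32)
coeffs[[2, 11, 25]] = [1.5, -0.7, 2.0]
patch = D @ coeffs

# Sparse coding under an orthogonal dictionary is closed-form:
# analysis transform + hard threshold, no OMP/K-SVD-style pursuit.
alpha = D.T @ patch
alpha[np.abs(alpha) < 0.1] = 0.0
reconstruction = D @ alpha
```

Training one such dictionary per directional patch class, as the paper does, keeps this closed-form coding while adapting the transform to local geometry.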

Journal ArticleDOI
TL;DR: The Tomographic Iterative GPU-based Reconstruction (TIGRE) Toolbox, a MATLAB/CUDA toolbox for fast and accurate 3D x-ray image reconstruction, is presented and an overview of the structure and techniques used in the creation of the toolbox is presented.
Abstract: In this article the Tomographic Iterative GPU-based Reconstruction (TIGRE) Toolbox, a MATLAB/CUDA toolbox for fast and accurate 3D x-ray image reconstruction, is presented. One of the key features is the implementation of a wide variety of iterative algorithms as well as FDK, including a range of algorithms in the SART family, the Krylov subspace family and a range of methods using total variation regularization. Additionally, the toolbox has GPU-accelerated projection and back projection using the latest techniques and it has a modular design that facilitates the implementation of new algorithms. We present an overview of the structure and techniques used in the creation of the toolbox, together with two usage examples. The TIGRE Toolbox is released under an open source licence, encouraging people to contribute.
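The scalar heart of the SART/ART family that TIGRE implements is the Kaczmarz update: sweep over the measurement rows and project the current estimate onto each row's hyperplane. A hedged toy sketch on a dense system (real toolboxes apply this with GPU projectors rather than an explicit matrix):

```python
import numpy as np

def kaczmarz(A, y, sweeps=500):
    """ART/Kaczmarz iteration: project the estimate onto the hyperplane
    of each measurement row in turn. SART-type methods average such
    updates over many rays at once."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a, yi in zip(A, y):
            x += (yi - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))   # toy "system matrix" (rays x voxels)
x_true = rng.standard_normal(4)
y = A @ x_true                    # consistent projection data

x_rec = kaczmarz(A, y)
```

For a consistent system this converges to the solution; with noisy or inconsistent data the iteration is usually relaxed or stopped early, which is one reason toolboxes ship many variants of the same family.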

Journal ArticleDOI
TL;DR: NIRS is summarized, various approaches in the effort to develop accurate and efficient DOT algorithms are described, and examples of clinical applications are presented, together with a discussion of the future prospects of DOT.
Abstract: Near-infrared diffuse optical tomography (DOT), one of the most sophisticated optical imaging techniques for observations through biological tissue, allows 3-D quantitative imaging of optical properties, which include functional and anatomical information. DOT is expected to overcome the limitations of conventional near-infrared spectroscopy (NIRS) as well as offering the potential for diagnostic optical imaging. However, DOT has been under development for more than 30 years, and the difficulties in development are attributed to the fact that light is strongly scattered and that diffusive photons are used for the image reconstruction. The DOT algorithm is based on the techniques of inverse problems. The radiative transfer equation accurately describes photon propagation in biological tissue, while, because of its high computation load, the diffusion equation (DE) is often used as the forward model. However, the DE is invalid in low-scattering and/or highly absorbing regions and in the vicinity of light sources. The inverse problem is inherently ill-posed and highly underdetermined. Here, we first summarize NIRS and then describe various approaches in the efforts to develop accurate and efficient DOT algorithms and present some examples of clinical applications. Finally, we discuss the future prospects of DOT.

Journal ArticleDOI
TL;DR: A novel iterative imaging method for optical tomography that combines a nonlinear forward model based on the beam propagation method (BPM) with an edge-preserving three-dimensional (3-D) total variation (TV) regularizer and a time-reversal scheme that allows for an efficient computation of the derivative of the transmitted wave-field with respect to the distribution of the refractive index.
Abstract: Optical tomographic imaging requires an accurate forward model as well as regularization to mitigate missing-data artifacts and to suppress noise. Nonlinear forward models can provide more accurate interpretation of the measured data than their linear counterparts, but they generally result in computationally prohibitive reconstruction algorithms. Although sparsity-driven regularizers significantly improve the quality of reconstructed image, they further increase the computational burden of imaging. In this paper, we present a novel iterative imaging method for optical tomography that combines a nonlinear forward model based on the beam propagation method (BPM) with an edge-preserving three-dimensional (3-D) total variation (TV) regularizer. The central element of our approach is a time-reversal scheme, which allows for an efficient computation of the derivative of the transmitted wave-field with respect to the distribution of the refractive index. This time-reversal scheme together with our stochastic proximal-gradient algorithm makes it possible to optimize under a nonlinear forward model in a computationally tractable way, thus enabling a high-quality imaging of the refractive index throughout the object. We demonstrate the effectiveness of our method through several experiments on simulated and experimentally measured data.

Journal ArticleDOI
TL;DR: The purpose of the present study is to provide an open source SIM reconstruction code (named OpenSIM), which enables users to interactively vary the code parameters and study their effect on the reconstructed SIM image.
Abstract: Structured illumination microscopy (SIM) is a very important super-resolution microscopy technique, which provides high speed super-resolution with about two-fold spatial resolution enhancement. Several attempts aimed at improving the performance of SIM reconstruction algorithms have been reported. However, most of these highlight only one specific aspect of the SIM reconstruction—such as the accurate determination of the illumination pattern phase shift—whereas other key elements—such as determination of the modulation factor, estimation of the object power spectrum, Wiener filtering of frequency components with inclusion of object power spectrum information, and the translocation and merging of the overlapping frequency components—are usually glossed over superficially. In addition, most of the studies reported are scattered throughout the literature, and a comprehensive review of the theoretical background is lacking. The purpose of the present study is two-fold: 1) to collect the essential theoretical details of the SIM algorithm in one place, thereby making them readily accessible to readers for the first time; and 2) to provide an open source SIM reconstruction code (named OpenSIM), which enables users to interactively vary the code parameters and study their effect on the reconstructed SIM image.
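The Wiener filtering step mentioned above has the generic form conj(OTF)/(|OTF|² + w). A hedged 1D sketch of that filter applied to plain deblurring (SIM applies it per shifted frequency component before merging; PSF shape, width, and w below are illustrative):

```python
import numpy as np

n = 128
x = np.zeros(n)
x[40:60] = 1.0                        # simple 1D object

# Gaussian PSF and its OTF (ifftshift centers the PSF at sample 0).
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf /= psf.sum()
otf = np.fft.fft(np.fft.ifftshift(psf))

blurred = np.real(np.fft.ifft(np.fft.fft(x) * otf))

# Wiener filter: conj(OTF) / (|OTF|^2 + w). The regularizer w keeps
# frequencies where the OTF is tiny from being amplified into noise.
w = 1e-3
wiener = np.conj(otf) / (np.abs(otf) ** 2 + w)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))

err_blur = np.mean((blurred - x) ** 2)
err_rest = np.mean((restored - x) ** 2)
```

In SIM, w is chosen jointly with the estimated object power spectrum, which is exactly one of the "glossed over" details the paper sets out to document.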

Journal ArticleDOI
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction; it outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.

Journal ArticleDOI
TL;DR: Experimental results and comparison with other fusion techniques indicate that the proposed algorithm is fast and produces similar or better results than existing techniques for both multi-exposure as well as multi-focus images.
Abstract: A multi-exposure and multi-focus image fusion algorithm is proposed. The algorithm is developed for color images and is based on blending the gradients of the luminance components of the input images using the maximum gradient magnitude at each pixel location and then obtaining the fused luminance using a Haar wavelet-based image reconstruction technique. This image reconstruction algorithm is of O(N) complexity and includes a Poisson solver at each resolution to eliminate artifacts that may appear due to the nonconservative nature of the resulting gradient. The fused chrominance, on the other hand, is obtained as a weighted mean of the chrominance channels. The particular case of grayscale images is treated as luminance fusion. Experimental results and comparison with other fusion techniques indicate that the proposed algorithm is fast and produces similar or better results than existing techniques for both multi-exposure as well as multi-focus images.

Journal ArticleDOI
TL;DR: A novel framework for the single depth image superresolution is proposed that is guided by a high-resolution edge map, which is constructed from the edges of the low-resolution depth image through a Markov random field optimization in a patch synthesis based manner.
Abstract: Recently, consumer depth cameras have gained significant popularity due to their affordable cost. However, the limited resolution and quality of the depth maps generated by these cameras are still problematic for several applications. In this paper, a novel framework for single depth image superresolution is proposed. In our framework, the upscaling of a single depth image is guided by a high-resolution edge map, which is constructed from the edges of the low-resolution depth image through a Markov random field optimization in a patch-synthesis-based manner. We also exploit the self-similarity of patches during the edge construction stage, when limited training data are available. With the guidance of the high-resolution edge map, we upsample the depth image to high resolution through a modified joint bilateral filter. The edge-based guidance not only helps avoid artifacts introduced by direct texture prediction, but also reduces jagged artifacts and preserves sharp edges. Experimental results demonstrate the effectiveness of our method both qualitatively and quantitatively compared with state-of-the-art methods.
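The standard joint bilateral filter the authors build on can be sketched as follows. This is the unmodified textbook filter, not the paper's variant, and the parameter values and toy edge demo are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Upsample a low-resolution depth map under a high-resolution
    guide: the spatial kernel weighs pixel distance, the range kernel
    weighs similarity in the guide, so depth edges snap to guide edges.
    This is the standard joint bilateral filter, not the authors'
    modified variant."""
    h, w = guide_hr.shape
    # Nearest-neighbour initial upsampling of the depth map.
    ys = np.arange(h) * depth_lr.shape[0] // h
    xs = np.arange(w) * depth_lr.shape[1] // w
    depth_up = depth_lr[np.ix_(ys, xs)].astype(float)
    out = np.zeros((h, w))
    for yc in range(h):
        for xc in range(w):
            y0, y1 = max(0, yc - radius), min(h, yc + radius + 1)
            x0, x1 = max(0, xc - radius), min(w, xc + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - yc) ** 2 + (xx - xc) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-(guide_hr[y0:y1, x0:x1] - guide_hr[yc, xc]) ** 2
                         / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[yc, xc] = np.sum(wgt * depth_up[y0:y1, x0:x1]) / np.sum(wgt)
    return out

# Toy demo: a step edge in the guide keeps the upsampled depth edge sharp.
guide = np.zeros((16, 16)); guide[:, 8:] = 1.0
depth = np.zeros((8, 8)); depth[:, 4:] = 10.0
up = joint_bilateral_upsample(depth, guide)
```

Because the range kernel is computed on the guide, pixels across the guide edge get near-zero weight and the depth discontinuity stays sharp instead of being smeared as a plain Gaussian upsampler would.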

Journal ArticleDOI
TL;DR: This work introduces an iterative method to correct position errors based on the simulated annealing (SA) algorithm and demonstrates that this method can both improve the quality of the recovered object image and relax the LED elements' position accuracy requirement while aligning the FPM imaging platforms.
Abstract: Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique, which employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix’s position is critical to achieving good recovery quality. Furthermore, given the wide field-of-view (FOV) in FPM, different regions in the FOV have different sensitivities to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations for several images with low illumination NAs is first implemented to estimate the initial values of the global positional misalignment model through non-linear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, and it is demonstrated that this method can both improve the quality of the recovered object image and relax the accuracy requirement on the LED elements’ positions when aligning FPM imaging platforms.
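The simulated-annealing loop at the core of such a correction can be sketched generically. The quadratic toy cost below stands in for the actual FPM recovery-error metric, and all parameter values are illustrative assumptions.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Generic simulated-annealing minimizer of the kind used to refine
    the LED positions; `cost` is supplied by the caller (in FPM it
    would be the recovery error of the reconstructed image)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, t = cost(x), t0
    best, fbest = x.copy(), fx
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = cost(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta / t).
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

# Toy demo: recover a 2D positional offset from a quadratic cost.
true_shift = np.array([1.3, -0.7])
est, err = simulated_annealing(lambda p: float(np.sum((p - true_shift) ** 2)),
                               [0.0, 0.0])
```

The high initial temperature lets the search escape local minima of the misalignment cost; as the temperature cools, the loop degenerates into a local downhill search around the best position found.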

Book ChapterDOI
17 Oct 2016
TL;DR: It is shown that filtered back-projection can be mapped identically onto a deep neural network architecture and can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.
Abstract: In this paper, we demonstrate that image reconstruction can be expressed in terms of neural networks. We show that filtered back-projection can be mapped identically onto a deep neural network architecture. As in the case of iterative reconstruction, the straightforward realization as a matrix multiplication is not feasible. Thus, we propose to compute the back-projection layer efficiently as a fixed function and its gradient as a projection operation. This allows a data-driven approach for joint optimization of correction steps in the projection domain and the image domain. As a proof of concept, we demonstrate that we are able to learn weightings and additional filter layers that consistently reduce the reconstruction error of a limited-angle reconstruction by a factor of two while keeping the same computational complexity as filtered back-projection. We believe that this kind of learning approach can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.
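The classical filtered back-projection that the paper maps onto network layers can be sketched as ramp filtering plus linear-interpolation back-projection; no learned weights appear here, and the analytic disk sinogram in the demo is an assumption standing in for a real forward projector.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal filtered back-projection: ramp-filter each projection in
    the Fourier domain (with zero-padding to reduce wrap-around), then
    smear the filtered projections back over the image grid."""
    n = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(2 * n))
    spec = np.fft.fft(sinogram, n=2 * n, axis=0) * ramp[:, None]
    filtered = np.real(np.fft.ifft(spec, axis=0))[:n]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for i, ang in enumerate(np.deg2rad(angles_deg)):
        t = x * np.cos(ang) + y * np.sin(ang) + c      # detector coordinate
        t0 = np.clip(np.floor(t).astype(int), 0, n - 2)
        w = np.clip(t - t0, 0.0, 1.0)                  # linear interpolation
        recon += (1 - w) * filtered[t0, i] + w * filtered[t0 + 1, i]
    return recon * np.pi / len(angles_deg)

# Toy demo: reconstruct a uniform disk from its analytic projections.
n = 64
c = (n - 1) / 2.0
t = np.arange(n) - c
proj = 2.0 * np.sqrt(np.maximum(12.0 ** 2 - t ** 2, 0.0))  # disk, radius 12
angles = np.arange(0.0, 180.0, 3.0)
recon = fbp(np.tile(proj[:, None], (1, len(angles))), angles)
```

Every operation here is linear (a filter, an interpolation, a weighted sum over angles), which is what makes the identical mapping onto fixed network layers, with the filter and per-projection weightings left learnable, possible.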

Journal ArticleDOI
TL;DR: In this paper, the authors systematically summarize and evaluate various image reconstruction algorithms that have been studied and developed around the world over many years, providing a valuable reference for practical applications, including industrial multi-phase flow measurement and biological medical diagnosis.
Abstract: Purpose Electrical capacitance tomography (ECT) and electrical resistance tomography (ERT) are promising techniques for multiphase flow measurement due to their high speed, low cost, non-invasiveness and visualization capability. There are two major difficulties in image reconstruction for ECT and ERT: the “soft-field” effect, and the ill-posedness of the inverse problem, which comprises two problems: the problem is under-determined, and the solution is not stable, i.e. it is very sensitive to measurement errors and noise. This paper aims to summarize and evaluate the various reconstruction algorithms which have been studied and developed around the world over many years and to provide a reference for further research and application. Design/methodology/approach In the past 10 years, various image reconstruction algorithms have been developed to deal with these problems, including in the fields of industrial multi-phase flow measurement and biological medical diagnosis. Findings This paper reviews existing image reconstruction algorithms and the new algorithms proposed by the authors for electrical capacitance tomography and electrical resistance tomography in multi-phase flow measurement and biological medical diagnosis. Originality/value The authors systematically summarize and evaluate the various reconstruction algorithms which have been studied and developed around the world over many years and provide a valuable reference for practical applications.
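As one concrete example of the algorithm family such reviews cover, the classical Landweber iteration for ECT/ERT can be sketched as below. The random sensitivity matrix in the demo is a stand-in for a physically derived one, and the [0, 1] clipping reflects the usual normalized-permittivity convention.

```python
import numpy as np

def landweber(S, lam, iters=200, alpha=None):
    """Landweber iteration, a classic ECT/ERT reconstruction algorithm:
    g <- g + alpha * S^T (lam - S g), with the permittivity image g
    clipped to [0, 1] after each step."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2    # step below 1/sigma_max^2
    g = np.zeros(S.shape[1])
    for _ in range(iters):
        g = np.clip(g + alpha * S.T @ (lam - S @ g), 0.0, 1.0)
    return g

# Toy demo: a random sensitivity matrix stands in for a physical one.
rng = np.random.default_rng(3)
S = rng.random((28, 64))                 # 28 capacitance measurements, 64 pixels
g_true = np.zeros(64); g_true[[5, 6, 13, 14]] = 1.0
g_hat = landweber(S, S @ g_true)
```

The iteration illustrates both difficulties the review names: with far fewer measurements than pixels the problem is under-determined, and without the step-size bound and clipping the iterates amplify measurement noise, which is the instability side of the ill-posedness.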

Journal ArticleDOI
TL;DR: A novel fusion method based on a multi-task robust sparse representation (MRSR) model and spatial context information to address the fusion of multi-focus gray-level images with misregistration.
Abstract: We present a novel fusion method based on a multi-task robust sparse representation (MRSR) model and spatial context information to address the fusion of multi-focus gray-level images with misregistration. First, we present a robust sparse representation (RSR) model by replacing the conventional least-squared reconstruction error by a sparse reconstruction error. We then propose a multi-task version of the RSR model, viz., the MRSR model. The latter is then applied to multi-focus image fusion by employing the detailed information regarding each image patch and its spatial neighbors to collaboratively determine both the focused and defocused regions in the input images. To achieve this, we formulate the problem of extracting details from multiple image patches as a joint multi-task sparsity pursuit based on the MRSR model. Experimental results demonstrate that the suggested algorithm is competitive with the current state-of-the-art and superior to some approaches that use traditional sparse representation methods when input images are misregistered.

Journal ArticleDOI
TL;DR: A projected iterative soft-thresholding algorithm (pISTA) and its acceleration, pFISTA, are proposed for CS-MRI image reconstruction; both exploit the sparsity of magnetic resonance (MR) images under the redundant representation of tight frames.
Abstract: Compressed sensing (CS) has exhibited great potential for accelerating magnetic resonance imaging (MRI). In CS-MRI, we want to reconstruct a high-quality image from very few samples in a short time. In this paper, we propose a fast algorithm, called the projected iterative soft-thresholding algorithm (pISTA), and its acceleration, pFISTA, for CS-MRI image reconstruction. The proposed algorithms exploit the sparsity of magnetic resonance (MR) images under the redundant representation of tight frames. We prove that pISTA and pFISTA converge to a minimizer of a convex function with a balanced tight-frame sparsity formulation. pFISTA introduces only one adjustable parameter, the step size, and we provide an explicit rule to set it. Numerical experiment results demonstrate that pFISTA converges faster than state-of-the-art counterparts while achieving comparable reconstruction errors. Moreover, the reconstruction errors incurred by pFISTA appear insensitive to the step size.
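The soft-thresholding iteration underlying pISTA/pFISTA can be sketched in its plain l1 form. The projection onto the tight-frame formulation that distinguishes pFISTA is omitted, and the random sensing matrix in the demo is a generic stand-in for an undersampled MRI operator.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam=0.01, iters=500):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, with the step
    size set by an explicit rule (1 / Lipschitz constant of the
    smooth term's gradient)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); tk = 1.0
    for _ in range(iters):
        x_new = soft(z - step * (A.T @ (A @ z - y)), step * lam)
        tk_new = (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2)) / 2.0
        z = x_new + ((tk - 1.0) / tk_new) * (x_new - x)   # momentum step
        x, tk = x_new, tk_new
    return x

# Toy demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 30, 77]] = [1.0, -2.0, 1.5]
x_hat = fista(A, A @ x_true)
```

The momentum sequence `tk` is what upgrades the O(1/k) rate of plain ISTA to the O(1/k^2) rate of FISTA; pFISTA adds a projection so the same scheme applies under redundant tight frames.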

Journal ArticleDOI
TL;DR: A magnetic resonance imaging (MRI) reconstruction algorithm, which uses decoupled iterations alternating over a denoising step realized by the BM3D algorithm and a reconstruction step through an optimization formulation, which contributes to the reconstruction performance.
Abstract: The block-matching 3D (BM3D) algorithm is an efficient image model, which has found few applications other than its niche area of denoising. We develop a magnetic resonance imaging (MRI) reconstruction algorithm that uses decoupled iterations alternating over a denoising step realized by the BM3D algorithm and a reconstruction step through an optimization formulation. The decoupling of the two steps allows the adoption of a strategy with a varying regularization parameter, which contributes to the reconstruction performance. This new iterative algorithm efficiently harnesses the power of the nonlocal, image-dependent BM3D model. The MRI reconstruction performance of the proposed algorithm is superior to that of state-of-the-art algorithms from the literature. A convergence analysis of the algorithm is also presented.
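The decoupled denoise/reconstruct alternation with a decaying regularization weight can be sketched as follows. A 3x3 box filter stands in for BM3D, and the image-domain inpainting demo replaces the actual MRI sampling operator; both are assumptions of this sketch.

```python
import numpy as np

def decoupled_recon(y, mask, denoise, iters=30, lam0=0.5, decay=0.9):
    """Alternate a denoising step with a data-consistency step while
    decreasing the regularization weight lam each pass. The update
    x = argmin ||mask*x - y||^2 + lam*||x - d||^2 has the closed
    form used below (a pixel-wise weighted average)."""
    x = np.zeros_like(y)
    lam = lam0
    for _ in range(iters):
        d = denoise(x)                         # denoising step (BM3D's role)
        x = (mask * y + lam * d) / (mask + lam)
        lam *= decay                           # varying regularization weight
    return x

def box_denoise(img):
    """3x3 box filter standing in for BM3D (an assumption of this sketch)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

# Toy demo: inpaint a smooth image observed through a 50% random mask.
rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
truth = np.sin(2 * np.pi * gx) * np.cos(2 * np.pi * gy)
mask = (rng.random(truth.shape) < 0.5).astype(float)
recon = decoupled_recon(mask * truth, mask, box_denoise)
```

Because the denoiser is only ever called as a black box, any image model can be plugged in, which is exactly what lets the nonlocal, image-dependent BM3D model be harnessed without an explicit regularizer.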

Journal ArticleDOI
TL;DR: This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy.
Abstract: Estimation of a parameter of interest from image data represents a task that is commonly carried out in single molecule microscopy data analysis. The determination of the positional coordinates of a molecule from its image, for example, forms the basis of standard applications such as single molecule tracking and localization-based super-resolution image reconstruction. Assuming that the estimator used recovers, on average, the true value of the parameter, its accuracy, or standard deviation, is then at best equal to the square root of the Cramér-Rao lower bound. The Cramér-Rao lower bound can therefore be used as a benchmark in the evaluation of the accuracy of an estimator. Additionally, as its value can be computed and assessed for different experimental settings, it is useful as an experimental design tool. This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy. The material includes a presentation of the photon detection process that underlies all image data, various image data models that describe images acquired with different detector types, and Fisher information expressions that are necessary for the calculation of the lower bound. Throughout the tutorial, examples involving concrete estimation problems are used to illustrate the effects of various factors on the accuracy of parameter estimation and, more generally, to demonstrate the flexibility of the mathematical framework.
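For the simplest setting (a 1D Gaussian photon-distribution profile, ideal Poisson photon statistics, no background or pixelation), the Cramér-Rao lower bound on the localization standard deviation reduces to the well-known s / sqrt(N). The sketch below recovers this by integrating the Fisher information numerically; the grid choices are assumptions of the sketch.

```python
import numpy as np

def crlb_location_1d(s, n_photons):
    """Cramér-Rao lower bound on the std. deviation of the location
    estimate for a 1D Gaussian photon-distribution profile of width s,
    under ideal Poisson photon statistics. The known closed form is
    s / sqrt(N); here the Fisher information is integrated numerically
    as a check."""
    x = np.linspace(-8.0 * s, 8.0 * s, 4001)
    dx = x[1] - x[0]
    f = np.exp(-x ** 2 / (2.0 * s ** 2)) / (np.sqrt(2.0 * np.pi) * s)
    dfdx0 = (x / s ** 2) * f               # derivative w.r.t. the location
    fisher = n_photons * np.sum(dfdx0 ** 2 / f) * dx   # I = N * int (f')^2 / f
    return 1.0 / np.sqrt(fisher)

bound = crlb_location_1d(s=1.3, n_photons=500.0)
```

Swapping in a different photon density `f` (an Airy profile, a pixelated or background-corrupted model) changes only the integrand, which is the flexibility of the Fisher-information framework the tutorial is about.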

Journal ArticleDOI
TL;DR: The results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used.
Abstract: Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Perot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays.
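A minimal stand-in for the TV machinery is gradient descent on an eps-smoothed TV objective, shown here on a denoising toy problem; the Bregman enhancement and the PAT forward operator are omitted, and all parameter values are assumptions of this sketch.

```python
import numpy as np

def tv_denoise(y, lam=0.15, tau=0.1, eps=0.1, iters=300):
    """Gradient descent on the eps-smoothed TV objective
    0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps^2)."""
    x = y.copy()
    for _ in range(iters):
        gx = np.diff(x, axis=1, append=x[:, -1:])     # forward differences
        gy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        # Backward-difference divergence (adjoint of the forward grad).
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        x = x - tau * ((x - y) - lam * div)
    return x

# Toy demo: denoise a noisy piecewise-constant image.
rng = np.random.default_rng(2)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

TV favors piecewise-constant solutions, which is why it suits vascular-like absorbing structures; the Bregman iterations mentioned in the abstract wrap such a solver in an outer loop that adds the residual back to counteract TV's contrast loss.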