
Showing papers on "Iterative reconstruction published in 2003"


Journal ArticleDOI
TL;DR: The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts by presenting a technical review of the various SR methodologies that are often employed.
Abstract: A new approach toward increasing spatial resolution is required to overcome the limitations of the sensors and optics manufacturing technology. One promising approach is to use signal processing techniques to obtain a high-resolution (HR) image (or sequence) from observed multiple low-resolution (LR) images. Such a resolution enhancement approach has been one of the most active research areas, and it is called super resolution (SR) (or HR) image reconstruction or simply resolution enhancement. In this article, we use the term "SR image reconstruction" to refer to a signal processing approach toward resolution enhancement because the term "super" in "super resolution" represents very well the characteristics of the technique overcoming the inherent resolution limitation of LR imaging systems. The major advantage of the signal processing approach is that it may cost less and the existing LR imaging systems can still be utilized. SR image reconstruction has proved useful in many practical cases where multiple frames of the same scene can be obtained, including medical imaging, satellite imaging, and video applications. The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts. To this purpose, we present a technical review of various existing SR methodologies that are often employed. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.
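The LR acquisition process the abstract refers to can be sketched as a blur-decimate-noise pipeline. A minimal NumPy illustration follows; the box PSF, decimation factor, and noise level are purely illustrative, not a model from the paper:

```python
import numpy as np

def degrade(hr, factor=2, blur_size=3, noise_std=0.01, rng=None):
    """Simulate one low-resolution observation: blur with a box PSF,
    downsample, and add Gaussian sensor noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    pad = blur_size // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr)
    for dy in range(blur_size):          # accumulate the box-blur sum
        for dx in range(blur_size):
            blurred += padded[dy:dy + hr.shape[0], dx:dx + hr.shape[1]]
    blurred /= blur_size ** 2
    lr = blurred[::factor, ::factor]     # decimate (this causes the aliasing)
    return lr + rng.normal(0.0, noise_std, lr.shape)

hr = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # synthetic HR image
lr = degrade(hr)
print(lr.shape)  # (32, 32)
```

SR reconstruction methods invert this pipeline by fusing several such LR observations with different sub-pixel shifts.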

3,491 citations


Book
01 Jan 2003
TL;DR: Introduction Preliminaries Image Reconstruction Image Presentation Key Performance Parameters of a CT Scanner Major Components of a CT Scanner Image Artifacts: Appearances, Causes, and Corrections Computer Simulation and Analysis.
Abstract: Introduction Preliminaries Image Reconstruction Image Presentation Key Performance Parameters of a CT Scanner Major Components of CT Scanner Image Artifacts: Appearances, Causes, and Corrections Computer Simulation and Analysis. Helical or Spiral CT Multislice CT X-ray Dose and Reduction Techniques Advanced CT Applications.

1,361 citations


Journal ArticleDOI
TL;DR: In this article, a review of existing image reconstruction algorithms for electrical capacitance tomography (ECT) is presented, including linear back-projection, singular value decomposition, Tikhonov regularization, Newton-Raphson, steepest descent method, Landweber iteration, conjugate gradient method, algebraic reconstruction techniques, simultaneous iterative reconstruction techniques and model-based reconstruction.
Abstract: Electrical capacitance tomography (ECT) is used to image cross-sections of industrial processes containing dielectric material. This technique has been under development for more than a decade. The task of image reconstruction for ECT is to determine the permittivity distribution and hence material distribution over the cross-section from capacitance measurements. There are three principal difficulties with image reconstruction for ECT: (1) the relationship between the permittivity distribution and capacitance is non-linear and the electric field is distorted by the material present, the so-called 'soft-field' effect; (2) the number of independent measurements is limited, leading to an under-determined problem; and (3) the inverse problem is ill posed and ill conditioned, making the solution sensitive to measurement errors and noise. Regularization methods are needed to treat this ill-posedness. This paper reviews existing image reconstruction algorithms for ECT, including linear back-projection, singular value decomposition, Tikhonov regularization, Newton–Raphson, iterative Tikhonov, the steepest descent method, Landweber iteration, the conjugate gradient method, algebraic reconstruction techniques, simultaneous iterative reconstruction techniques and model-based reconstruction. Some of these algorithms are examined by simulation and experiment for typical permittivity distributions. Future developments in image reconstruction for ECT are discussed.
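Several of the iterative schemes listed share the same gradient-step skeleton; Landweber iteration, for instance, repeatedly back-projects the measurement residual. A toy NumPy sketch on a synthetic, under-determined linear system (the matrix, sizes, and bounds are invented for illustration, not a real ECT sensitivity model):

```python
import numpy as np

# Toy linear model c = S g: S the sensitivity matrix, g the permittivity
# image, c the capacitance measurements. All values here are synthetic.
rng = np.random.default_rng(1)
S = rng.random((30, 100))            # 30 measurements, 100 pixels (under-determined)
g_true = np.zeros(100)
g_true[40:60] = 1.0                  # a simple "object" to recover
c = S @ g_true

def landweber(S, c, alpha, iters):
    g = np.zeros(S.shape[1])
    for _ in range(iters):
        g = g + alpha * S.T @ (c - S @ g)   # gradient step on ||c - S g||^2
        g = np.clip(g, 0.0, 1.0)            # project onto physical bounds
    return g

alpha = 1.0 / np.linalg.norm(S, 2) ** 2     # step below 2/||S||^2 for stability
g_hat = landweber(S, c, alpha, 200)
print(np.linalg.norm(S @ g_hat - c) < np.linalg.norm(c))  # residual shrinks: True
```

The projection step is one simple way to inject the regularization the abstract says is needed; Tikhonov and conjugate-gradient variants replace the plain gradient step.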

1,082 citations


Journal ArticleDOI
TL;DR: It is shown that the approximation error is bounded, and tools are presented to increase or decrease the density of the points, allowing the spacing among the points to be adjusted to control the error.
Abstract: We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). The computation of points on the surface is local, which results in an out-of-core technique that can handle any point set. We show that the approximation error is bounded and present tools to increase or decrease the density of the points, thus allowing an adjustment of the spacing among the points to control the error. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.

1,005 citations


Journal ArticleDOI
TL;DR: In this article, an inversion method is demonstrated that reconstructs the image of the object without prior knowledge of the object's shape or of the low spatial frequencies unavoidably lost in experiments.
Abstract: A solution to the inversion problem of scattering would offer aberration-free diffraction-limited three-dimensional images without the resolution and depth-of-field limitations of lens-based tomographic systems. Powerful algorithms are increasingly being used to act as lenses to form such images. Current image reconstruction methods, however, require the knowledge of the shape of the object and the low spatial frequencies unavoidably lost in experiments. Diffractive imaging has thus previously been used to increase the resolution of images obtained by other means. Here we experimentally demonstrate an inversion method, which reconstructs the image of the object without the need for any such prior knowledge.

787 citations


Proceedings ArticleDOI
18 Jun 2003
TL;DR: The novel contribution of the paper is the combination of these three previously developed components: image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics.
Abstract: An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed by adding back these two sub-images. The novel contribution of the paper is the combination of these three previously developed components: image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach.

534 citations


Journal ArticleDOI
TL;DR: An alternative solution to accounting for breathing motion in radiotherapy treatment planning is presented, where multislice CT scans are collected simultaneously with digital spirometry over many free breathing cycles to create a four-dimensional (4-D) image set, where tidal lung volume is the additional dimension.
Abstract: Breathing motion is a significant source of error in radiotherapy treatment planning for the thorax and upper abdomen. Accounting for breathing motion has a profound effect on the size of conformal radiation portals employed in these sites. Breathing motion also causes artifacts and distortions in treatment planning computed tomography (CT) scans acquired during free breathing, and it causes a breakdown of the assumption of the superposition of radiation portals in intensity-modulated radiation therapy, possibly leading to significant dose delivery errors. Proposed voluntary and involuntary breath-hold techniques have the potential to reduce or eliminate the effects of breathing motion; however, they are limited in practice by the fact that many lung cancer patients cannot tolerate holding their breath. We present an alternative solution for accounting for breathing motion in radiotherapy treatment planning, in which multislice CT scans are collected simultaneously with digital spirometry over many free breathing cycles to create a four-dimensional (4-D) image set, where tidal lung volume is the additional dimension. An analysis of this 4-D data leads to methods for digital-spirometry-based elimination or accounting of breathing motion artifacts in radiotherapy treatment planning for free breathing patients. The 4-D image set is generated by sorting free-breathing multislice CT scans according to user-defined tidal-volume bins. A multislice CT scanner is operated in the cine mode, acquiring 15 scans per couch position, while the patient undergoes simultaneous digital spirometry measurements. The spirometry is used to retrospectively sort the CT scans by their correlated tidal lung volume within the patient's normal breathing cycle. This method has been prototyped using data from three lung cancer patients. The actual tidal lung volumes agreed with the specified bin volumes within standard deviations ranging between 22 and 33 cm3. An analysis of sagittal and coronal images demonstrated relatively small (<1 cm) motion artifacts along the diaphragm, even for tidal volumes where the rate of breathing motion is greatest. While still under development, this technology has the potential to revolutionize radiotherapy treatment planning for the thorax and upper abdomen.
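The retrospective sorting step can be illustrated in a few lines of NumPy: each scan carries the tidal volume recorded by spirometry at acquisition and is grouped into user-defined volume bins to form the phases of the 4-D set. The volumes and bin edges below are invented for illustration:

```python
import numpy as np

# Spirometry-measured tidal volume at the moment each CT scan was acquired
tidal_volumes = np.array([120.0, 480.0, 250.0, 610.0, 90.0, 330.0])  # cm^3
bin_edges = np.array([0.0, 200.0, 400.0, 800.0])                     # 3 bins

bins = np.digitize(tidal_volumes, bin_edges) - 1    # bin index per scan
phases = {b: np.where(bins == b)[0].tolist()        # scan indices per phase
          for b in range(len(bin_edges) - 1)}
print(phases)  # {0: [0, 4], 1: [2, 5], 2: [1, 3]}
```

Each dictionary entry is one "phase" of the 4-D image set: the scans whose tidal volumes fall in that bin.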

501 citations


Journal ArticleDOI
TL;DR: The SR image reconstruction method estimates an HR image with finer spectral details from multiple LR observations degraded by blur, noise, and aliasing, and the major advantage of this approach is that it may cost less and the existing LR imaging systems can still be utilized.
Abstract: The spatial resolution that represents the number of pixels per unit area in an image is the principal factor in determining the quality of an image. With the development of image processing applications, there is a big demand for high-resolution (HR) images since HR images not only give the viewer a pleasing picture but also offer additional detail that is important for the analysis in many applications. The current technology to obtain HR images mainly depends on sensor manufacturing technology that attempts to increase the number of pixels per unit area by reducing the pixel size. However, the cost for high-precision optics and sensors may be inappropriate for general purpose commercial applications, and there is a limitation to pixel size reduction due to shot noise encountered in the sensor itself. Therefore, a resolution enhancement approach using signal processing techniques has been a great concern in many areas, and it is called super-resolution (SR) (or HR) image reconstruction or simply resolution enhancement in the literature. In this issue, we use the term “SR image reconstruction” to refer to a signal processing approach toward resolution enhancement, because the term “super” very well represents the characteristics of the technique overcoming the inherent resolution limitation of low-resolution (LR) imaging systems. The term SR was originally used in optics, and it refers to the algorithms that mainly operate on a single image to extrapolate the spectrum of an object beyond the diffraction limit (SR restoration). These two SR concepts (SR reconstruction and SR restoration) have a common focus in the aspect of recovering high-frequency information that is lost or degraded during the image acquisition. However, the cause of the loss of high-frequency information differs between these two concepts. 
SR restoration in optics attempts to recover information beyond the diffraction cutoff frequency, while the SR reconstruction method in engineering tries to recover high-frequency components corrupted by aliasing. We hope that readers do not confuse the super resolution in this issue with the term super resolution used in optics. SR image reconstruction algorithms investigate the relative motion information between multiple LR images (or a video sequence) and increase the spatial resolution by fusing them into a single frame. In doing so, they also remove the effect of possible blurring and noise in the LR images. In summary, the SR image reconstruction method estimates an HR image with finer spectral details from multiple LR observations degraded by blur, noise, and aliasing. The major advantage of this approach is that it may cost less and the existing LR imaging systems can still be utilized. Considering the maturity of this field and its various prospective applications, it seems timely and appropriate to address the topic of SR in this special issue of the magazine. This special section contains five articles covering various aspects of SR techniques. The first article, “Super-Resolution Image Reconstruction: A Technical Overview” by Sungcheol Park, Minkyu Park, and Moon Gi Kang, provides an introduction to the concepts and definitions of the SR image reconstruction as well as an overview of various existing SR algorithms. Advanced issues that are currently under investigation in this area are also discussed. The second article, “High-Resolution Images from Low-Resolution Compressed Video,” by Andrew C. Segall, Rafael Molina, and Aggelos K. Katsaggelos, considers the SR techniques for compressed video.
Since images are routinely compressed prior to transmission and storage in current acquisition systems, it is important to take into account the characteristics of compression systems in developing the SR techniques. In this article, they survey models for the compression system and develop SR techniques within the Bayesian framework. The third article, by Deepu Rajan, Subhasis Chaudhuri, and Manjunath V. Joshi, titled “Multi-Objective Super-Resolution Technique: Concept and Examples,”

422 citations


Journal ArticleDOI
TL;DR: A tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation that uses a min-max formulation to derive the temporal interpolator.
Abstract: In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are currently used to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform, with normalized root mean squared approximation errors of 0.07%. The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.

402 citations


Journal ArticleDOI
TL;DR: A method is described for using a limited number of low-dose radiographs to reconstruct the three-dimensional distribution of x-rays attenuation in the breast, using x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques.
Abstract: A method is described for using a limited number (typically 10–50) of low-dose radiographs to reconstruct the three-dimensional (3D) distribution of x-ray attenuation in the breast. The method uses x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques. Images are reconstructed with high resolution in two dimensions and lower resolution in the third dimension. The 3D distribution of attenuation that is projected into one image in conventional mammography can be separated into many layers (typically 30–80 1-mm-thick layers, depending on breast thickness), increasing the conspicuity of features that are often obscured by overlapping structure in a single-projection view. Schemes that record breast images at nonuniform angular increments, nonuniform image exposure, and nonuniform detector resolution are investigated in order to reduce the total x-ray exposure necessary to obtain diagnostically useful 3D reconstructions, and to improve the quality of the reconstructed images for a given exposure. The total patient radiation dose can be comparable to that used for a standard two-view mammogram. The method is illustrated with images from mastectomy specimens, a phantom, and human volunteers. The results show how image quality is affected by various data-collection protocols.

392 citations


Proceedings ArticleDOI
27 Oct 2003
TL;DR: A new approach to high quality 3D object reconstruction is presented, based on a deformable model, which defines the framework where texture and silhouette information can be fused and provides a robust way to integrate the silhouettes in the evolution algorithm.
Abstract: We present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multistereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multigrid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.

Journal ArticleDOI
TL;DR: This work proposes to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space, and shows that face-space super- Resolution is more robust to registration errors and noise than pixel-domain super- resolution because of the addition of model-based constraints.
Abstract: Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.

Journal ArticleDOI
TL;DR: This tomography system provides robust, quantitative, full 3D image reconstructions with the advantages of high data throughput, single detector-tissue coupling path, and large (1L) imaging domains and it is found that point spread function measurements provide a useful and comprehensive representation of system performance.
Abstract: Three-dimensional diffuse optical tomography (DOT) of breast requires large data sets for even modest resolution (1 cm). We present a hybrid DOT system that combines a limited number of frequency domain (FD) measurements with a large set of continuous wave (cw) measurements. The FD measurements are used to quantitatively determine tissue-averaged absorption and scattering coefficients. The larger cw data sets (~10^5 measurements), collected with a lens-coupled CCD, permit 3D DOT reconstructions of a 1-liter tissue volume. To address the computational complexity of large data sets and 3D volumes we employ finite-difference-based reconstructions computed in parallel. Tissue phantom measurements evaluate imaging performance. The tests include the following: point spread function measures of resolution, characterization of the size and contrast of single objects, field of view measurements and spectral characterization of constituent concentrations. We also report in vivo measurements. Average tissue optical properties of a healthy breast are used to deduce oxy- and deoxy-hemoglobin concentrations. Differential imaging with a tumor-simulating target adhered to the surface of a healthy breast evaluates the influence of physiologic fluctuations on image noise. This tomography system provides robust, quantitative, full 3D image reconstructions with the advantages of high data throughput, a single detector-tissue coupling path, and large (1 L) imaging domains. In addition, we find that point spread function measurements provide a useful and comprehensive representation of system performance.

Journal ArticleDOI
TL;DR: The reconstructed tumor from the breast cancer patient was found to have a higher oxy-deoxy hemoglobin concentration and also a higher oxygen saturation level than the background, indicating a ductal carcinoma that corresponds well to histology findings.
Abstract: Three-dimensional (3D), multiwavelength near-infrared tomography has the potential to provide new physiological information about biological tissue function and pathological transformation. Fast and reliable measurements of multiwavelength data from multiple planes over a region of interest, together with adequate model-based nonlinear image reconstruction, form the major components of successful estimation of internal optical properties of the region. These images can then be used to examine the concentration of chromophores such as hemoglobin, deoxyhemoglobin, water, and lipids that in turn can serve to identify and characterize abnormalities located deep within the domain. We introduce and discuss a 3D modeling method and image reconstruction algorithm that is currently in place. Reconstructed images of optical properties are presented from simulated data, measured phantoms, and clinical data acquired from a breast cancer patient. It is shown that, with a relatively fast 3D inversion algorithm, useful images of optical absorption and scatter can be calculated with good separation and localization in all cases. It is also shown that, by use of the calculated optical absorption over a range of wavelengths, the oxygen saturation distribution of a tissue under investigation can be deduced from oxygenated and deoxygenated hemoglobin maps. With this method the reconstructed tumor from the breast cancer patient was found to have a higher oxy-deoxy hemoglobin concentration and also a higher oxygen saturation level than the background, indicating a ductal carcinoma that corresponds well to histology findings.

Journal ArticleDOI
TL;DR: This work investigates the possibility of focusing multipass synthetic aperture radar data in the elevation direction, neglecting any mutual interaction between the targets and assuming propagation in homogeneous media, to achieve three-dimensional tomographic reconstruction in the presence of volumetric scattering.
Abstract: This paper deals with the use of multipass synthetic aperture radar (SAR) data to achieve three-dimensional tomographic reconstruction in the presence of volumetric scattering. Starting from azimuth- and range-focused SAR data relative to the same area, neglecting any mutual interaction between the targets, and assuming propagation in homogeneous media, we investigate the possibility of focusing the data in the elevation direction as well. The problem is formulated in the framework of linear inverse problems, and the solution makes use of the singular value decomposition of the relevant operator. This allows us to properly take into account nonuniform orbit separation and to exploit a priori knowledge of the size of the volume involved in the scattering mechanism, thus leading to superresolution in the elevation direction. Results obtained on simulated data demonstrate the feasibility of the proposed processing technique.
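The truncated-SVD inversion at the heart of such elevation focusing can be sketched as follows. The operator here is a random stand-in for the actual SAR elevation kernel, and the truncation threshold is an arbitrary illustrative choice:

```python
import numpy as np

# Under-determined inverse problem y = A x: few passes, many elevation
# cells. A is random here, standing in for the SAR elevation operator.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 20))      # 8 passes, 20 elevation cells
x_true = np.zeros(20)
x_true[5] = 1.0                       # one scatterer along elevation
y = A @ x_true + 0.01 * rng.standard_normal(8)

# Truncated SVD: discard small singular values to regularize the
# ill-conditioned inversion against measurement noise.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 0.1 * s[0]))       # keep components above the threshold
x_hat = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
print(x_hat.shape)  # (20,)
```

Nonuniform orbit separation is absorbed naturally, since the SVD is computed for whatever sampling pattern the operator encodes.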

Journal ArticleDOI
TL;DR: It is shown that the LP with orthogonal filters is a tight frame, and thus, the optimal linear reconstruction using the dual frame operator has a simple structure that is symmetric with the forward transform.
Abstract: Burt and Adelson (1983) introduced the Laplacian pyramid (LP) as a multiresolution representation for images. We study the LP using frame theory, and this reveals that the usual reconstruction is suboptimal. We show that the LP with orthogonal filters is a tight frame, and thus, the optimal linear reconstruction using the dual frame operator has a simple structure that is symmetric with the forward transform. In more general cases, we propose an efficient filterbank (FB) for the reconstruction of the LP using projection that leads to a proven improvement over the usual method in the presence of noise. Setting up the LP as an oversampled FB, we offer a complete parameterization of all synthesis FBs that provide perfect reconstruction for the LP. Finally, we consider the situation where the LP scheme is iterated and derive the continuous-domain frames associated with the LP.
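A one-level, 1-D toy version of the Laplacian pyramid shows the analysis steps and the "usual" reconstruction that the paper revisits. The averaging/replication filters here are illustrative stand-ins, not the orthogonal filters of the tight-frame result:

```python
import numpy as np

def analyze(x):
    """One LP analysis level: lowpass + downsample, then the residual."""
    coarse = 0.5 * (x[0::2] + x[1::2])    # lowpass and decimate
    predicted = np.repeat(coarse, 2)      # upsample by replication
    detail = x - predicted                # Laplacian residual
    return coarse, detail

def synthesize(coarse, detail):
    """The 'usual' (Burt-Adelson) synthesis: reverse the analysis steps."""
    return np.repeat(coarse, 2) + detail

x = np.arange(8, dtype=float)
c, d = analyze(x)
x_rec = synthesize(c, d)
print(np.allclose(x, x_rec))  # perfect reconstruction on clean data: True
```

On clean coefficients this reconstruction is exact; the paper's point is that when the coefficients are noisy, a dual-frame or projection-based synthesis does strictly better than this naive reversal.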

Journal ArticleDOI
TL;DR: MRI-based attenuation correction in 3D brain PET would likely be the method of choice for the foreseeable future as a second best approach in a busy nuclear medicine center and could be applied to other functional brain imaging modalities such as SPECT.
Abstract: Reliable attenuation correction represents an essential component of the long chain of modules required for the reconstruction of artifact-free, quantitative brain positron emission tomography (PET) images. In this work we demonstrate the proof of principle of segmented magnetic resonance imaging (MRI)-guided attenuation and scatter corrections in three-dimensional (3D) brain PET. We have developed a method for attenuation correction based on registered T1-weighted MRI, eliminating the need for an additional transmission (TX) scan. The MR images were realigned to preliminary reconstructions of PET data using an automatic algorithm and then segmented by means of a fuzzy clustering technique which identifies tissues of significantly different density and composition. The voxels belonging to different regions were classified into air, skull, brain tissue and nasal sinuses. These voxels were then assigned theoretical tissue-dependent attenuation coefficients as reported in the ICRU 44 report, followed by Gaussian smoothing and addition of a good-statistics bed image. The MRI-derived attenuation map was then forward projected to generate attenuation correction factors (ACFs) to be used for correcting the emission (EM) data. The method was evaluated and validated on 10 patient data sets where TX and MRI brain images were available. Qualitative and quantitative assessment of differences between TX-guided and segmented MRI-guided 3D reconstructions was performed by visual assessment and by estimating parameters of clinical interest. The results indicated a small but noticeable improvement in image quality as a consequence of the reduction of noise propagation from TX into EM data. Considering the difficulties associated with preinjection TX-based attenuation correction and the limitations of current calculated attenuation correction, MRI-based attenuation correction in 3D brain PET would likely be the method of choice for the foreseeable future, as a second-best approach in a busy nuclear medicine center, and could be applied to other functional brain imaging modalities such as SPECT.

Journal ArticleDOI
TL;DR: This paper presents time-domain reconstruction algorithms for the thermoacoustic imaging of biological tissues to planar and cylindrical measurement configurations and generalizes the rigorous reconstruction formulas by employing Green's function technique.
Abstract: In this paper, we present time-domain reconstruction algorithms for the thermoacoustic imaging of biological tissues. The algorithm for a spherical measurement configuration has recently been reported in another paper. Here, we extend the reconstruction algorithms to planar and cylindrical measurement configurations. First, we generalize the rigorous reconstruction formulas by employing Green's function technique. Then, in order to detect small (compared with the measurement geometry) but deeply buried objects, we can simplify the formulas when two practical conditions exist: 1) that the high-frequency components of the thermoacoustic signals contribute more to the spatial resolution than the low-frequency ones, and 2) that the detecting distances between the thermoacoustic sources and the detecting transducers are much greater than the wavelengths of the high-frequency thermoacoustic signals (i.e., those that are useful for imaging). The simplified formulas are computed with temporal back projections and coherent summations over spherical surfaces using certain spatial weighting factors. We refer to these reconstruction formulas as modified back projections. Numerical results are given to illustrate the validity of these algorithms.

Journal ArticleDOI
TL;DR: Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.
Abstract: The nonparametric multiscale platelet algorithms presented in this paper, unlike traditional wavelet-based methods, are both well suited to photon-limited medical imaging applications involving Poisson data and capable of better approximating edge contours. This paper introduces platelets, localized functions at various scales, locations, and orientations that produce piecewise linear image approximations, and a new multiscale image decomposition based on these functions. Platelets are well suited for approximating images consisting of smooth regions separated by smooth boundaries. For smoothness measured in certain Hölder classes, it is shown that the error of m-term platelet approximations can decay significantly faster than that of m-term approximations in terms of sinusoids, wavelets, or wedgelets. This suggests that platelets may outperform existing techniques for image denoising and reconstruction. Fast, platelet-based, maximum penalized likelihood methods for photon-limited image denoising, deblurring and tomographic reconstruction problems are developed. Because platelet decompositions of Poisson distributed images are tractable and computationally efficient, existing image reconstruction methods based on expectation-maximization type algorithms can be easily enhanced with platelet techniques. Experimental results suggest that platelet-based methods can outperform standard reconstruction methods currently in use in confocal microscopy, image restoration, and emission tomography.

Journal ArticleDOI
TL;DR: It is shown that from any initial value the sequence generated by the SART converges to a weighted least square solution under the condition that coefficients of the linear imaging system are non-negative.
Abstract: Computed tomography (CT) has been extensively studied for years and widely used in the modern society. Although the filtered back-projection algorithm is the method of choice by manufacturers, efforts are being made to revisit iterative methods due to their unique advantages, such as superior performance with incomplete noisy data. In 1984, the simultaneous algebraic reconstruction technique (SART) was developed as a major refinement of the algebraic reconstruction technique (ART). However, the convergence of the SART has never been established since then. In this paper, the convergence is proved under the condition that coefficients of the linear imaging system are nonnegative. It is shown that from any initial guess the sequence generated by the SART converges to a weighted least square solution.
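The SART update whose convergence the paper establishes back-projects a row-normalized residual with column-sum weighting. A minimal dense-matrix sketch (our illustrative code; real CT implementations use sparse, on-the-fly projectors):

```python
import numpy as np

def sart(A, b, n_iter=100, relax=1.0, x0=None):
    """Simultaneous algebraic reconstruction technique (SART).

    A : (m, n) nonnegative system matrix, b : (m,) projection data.
    Each update back-projects the row-normalized residual with
    column-sum weighting; under the nonnegativity condition the
    iterates converge to a weighted least-squares solution.
    """
    m, n = A.shape
    row_sums = A.sum(axis=1)   # sum_j a_ij
    col_sums = A.sum(axis=0)   # sum_i a_ij
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        residual = (b - A @ x) / np.maximum(row_sums, 1e-12)
        x += relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)
    return x
```

For a consistent system the iterates approach an exact solution from any initial guess, which is the behavior the convergence proof guarantees.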

Journal ArticleDOI
TL;DR: The goal is to design and build a device specifically for imaging the function and anatomy of cancer in the most optimal and effective way, without conceptualizing it as combined PET and CT.

Journal ArticleDOI
TL;DR: A fully automatic technique for segmenting the airway tree in three-dimensional (3-D) CT images of the thorax using grayscale morphological reconstruction to identify candidate airways on CT slices and then reconstruct a connected 3-D airway tree.
Abstract: The lungs exchange air with the external environment via the pulmonary airways. Computed tomography (CT) scanning can be used to obtain detailed images of the pulmonary anatomy, including the airways. These images have been used to measure airway geometry, study airway reactivity, and guide surgical interventions. Prior to these applications, airway segmentation can be used to identify the airway lumen in the CT images. Airway tree segmentation can be performed manually by an image analyst, but the complexity of the tree makes manual segmentation tedious and extremely time-consuming. We describe a fully automatic technique for segmenting the airway tree in three-dimensional (3-D) CT images of the thorax. We use grayscale morphological reconstruction to identify candidate airways on CT slices and then reconstruct a connected 3-D airway tree. After segmentation, we estimate airway branchpoints based on connectivity changes in the reconstructed tree. Compared to manual analysis on 3-mm-thick electron-beam CT images, the automatic approach has an overall airway branch detection sensitivity of approximately 73%.
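Grayscale morphological reconstruction, the core operation used to find candidate airways, can be computed by iterating a geodesic dilation of a marker image under a mask until stability. A minimal NumPy-only sketch (our illustrative code, 4-connected and unoptimized; production code would use an efficient queue-based algorithm):

```python
import numpy as np

def grey_reconstruct(marker, mask):
    """Grayscale morphological reconstruction by iterative geodesic dilation.

    Repeatedly dilates `marker` (4-connected max filter) and clips it
    under `mask` until the result stops changing. The h-dome transform
    mask - grey_reconstruct(mask - h, mask) then isolates local peaks,
    a standard way to extract candidate structures slice by slice.
    """
    marker = np.minimum(marker, mask).astype(float)
    while True:
        p = np.pad(marker, 1, mode="edge")
        dil = np.maximum.reduce([p[1:-1, 1:-1], p[:-2, 1:-1],
                                 p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        new = np.minimum(dil, mask)
        if np.array_equal(new, marker):
            return new
        marker = new
```

Because the marker can only grow and is bounded above by the mask, the loop always terminates.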

Proceedings ArticleDOI
19 Oct 2003
TL;DR: MOLAR, a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction on a computer cluster, is designed with features including direct use of list-mode data with dynamic motion information (Polaris) and exact reprojection of each line-of-response (LOR).
Abstract: The HRRT PET system has the potential to produce human brain images with resolution better than 3 mm. To achieve the best possible accuracy and precision, we have designed MOLAR, a motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction on a computer cluster with the following features: direct use of list mode data with dynamic motion information (Polaris); exact reprojection of each line-of-response (LOR); system matrix computed from voxel-to-LOR distances (radial and axial); spatially varying resolution model implemented for each event by selection from precomputed line spread functions based on factors including detector obliqueness, crystal layer, and block detector position; distribution of events to processors and to subsets based on order of arrival; removal of voxels and events outside a reduced field-of-view defined by the attenuation map; no pre-corrections to Poisson data, i.e., all physical effects are defined in the model; randoms estimation from singles; model-based scatter simulation incorporated into the iterations; and component-based normalization. Preliminary computation estimates suggest that reconstruction of a single frame in one hour is achievable. Careful evaluation of this system will define which factors play an important role in producing high resolution, low-noise images with quantitative accuracy.
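The OSEM backbone underlying such a list-mode reconstructor cycles multiplicative EM updates over ordered subsets of the data. A minimal dense-matrix sketch of plain OSEM (our illustrative code; MOLAR itself operates on list-mode events with motion and resolution models folded into the system matrix):

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10):
    """Ordered-subsets EM for Poisson data y ~ Poisson(A x).

    Each sub-iteration uses only one block of projection rows,
    giving OSEM's characteristic speed-up over plain ML-EM.
    """
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            # multiplicative EM update restricted to this subset
            ratio = y[idx] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

Keeping the data uncorrected and modeling all physical effects in A, as the abstract describes, preserves the Poisson statistics that this update assumes.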

Journal ArticleDOI
TL;DR: A modified Tikhonov regularization method is introduced to include three-dimensional x-ray mammography as a prior in the diffuse optical tomography reconstruction and an approach is suggested to find the optimal regularization parameters.
Abstract: We introduce a modified Tikhonov regularization method to include three-dimensional x-ray mammography as a prior in the diffuse optical tomography reconstruction. With simulations we show that the optical image reconstruction resolution and contrast are improved by implementing this x-ray-guided spatial constraint. We suggest an approach to find the optimal regularization parameters. The presented preliminary clinical result indicates the utility of the method.
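One simple way to realize such an x-ray-guided spatial constraint is to relax the quadratic Tikhonov penalty inside the region indicated by the mammogram. A minimal linear sketch (our illustrative code and parameter names; the paper's forward model and penalty structure differ in detail):

```python
import numpy as np

def tikhonov_with_prior(A, y, region, lam=0.1, inside_weight=0.1):
    """Tikhonov-regularized reconstruction with a spatial prior.

    `region` is a boolean mask derived from a coregistered structural
    image; the penalty weight is reduced inside it so contrast there
    is less suppressed: minimize ||A x - y||^2 + sum_j w_j x_j^2,
    with w_j smaller inside the prior region.
    """
    w = np.where(region, inside_weight, 1.0) * lam
    lhs = A.T @ A + np.diag(w)   # regularized normal equations
    return np.linalg.solve(lhs, A.T @ y)
```

Choosing `lam` and `inside_weight` corresponds to the optimal-regularization-parameter search the abstract mentions.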

Journal ArticleDOI
TL;DR: This technique, which employs a pair of CCD cameras to detect the in-phase and quadrature components of the heterodyne signal simultaneously, offers the advantage of phase-drift suppression in interferometric measurement.
Abstract: A two-dimensional heterodyne detection technique based on the frequency-synchronous detection method [Jpn. J. Appl. Phys. 39, 1194 (2000)] is demonstrated for full-field optical coherence tomography. This technique, which employs a pair of CCD cameras to detect the in-phase and quadrature components of the heterodyne signal simultaneously, offers the advantage of phase-drift suppression in interferometric measurement. Horizontal cross-sectional images are acquired at the rate of 100 frames/s in a single longitudinal scan, with a depth interval of 6 μm, making the rapid reconstruction of three-dimensional images possible.
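Once the two cameras deliver simultaneous in-phase and quadrature frames, amplitude and phase follow directly, which is why a drift between the two quadratures is suppressed. A minimal demodulation sketch (our illustrative code):

```python
import numpy as np

def demodulate_iq(i_frame, q_frame):
    """Recover heterodyne amplitude and phase from simultaneously
    captured in-phase and quadrature camera frames.

    For s(t) = A cos(w t + phi): I = A cos(phi), Q = A sin(phi),
    so A = sqrt(I^2 + Q^2) and phi = atan2(Q, I), per pixel.
    """
    amplitude = np.hypot(i_frame, q_frame)
    phase = np.arctan2(q_frame, i_frame)
    return amplitude, phase
```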

Journal ArticleDOI
TL;DR: It is found that in all cases the limit is the sum of the minimum norm solution of a weighted least-squares problem and an oblique projection of the initial image onto the null space of the system matrix.
Abstract: We introduce a general iterative scheme for image reconstruction based on Landweber's method. In our configuration, a sequential block-iterative (SeqBI) version can be readily formulated from a simultaneous block-iterative (SimBI) version, and vice versa. This provides a mechanism to derive new algorithms from known ones. It is shown that some widely used iterative algorithms, such as the algebraic reconstruction technique (ART), simultaneous ART (SART), Cimmino's, and the recently designed diagonal weighting and component averaging algorithms, are special examples of the general scheme. We prove convergence of the general scheme under conditions more general than assumed in earlier studies, for its SeqBI and SimBI versions in the consistent and inconsistent cases, respectively. Our results suggest automatic relaxation strategies for the SeqBI and SimBI versions and characterize the dependence of the limit image on the initial guess. It is found that in all cases the limit is the sum of the minimum norm solution of a weighted least-squares problem and an oblique projection of the initial image onto the null space of the system matrix.
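The basic Landweber iteration that the general scheme builds on, and its characterized limit (minimum-norm weighted least-squares solution plus the null-space component of the initial guess), can be seen in a few lines. A minimal unweighted sketch (our illustrative code):

```python
import numpy as np

def landweber(A, b, n_iter=500, x0=None):
    """Landweber iteration x_{k+1} = x_k + t A^T (b - A x_k).

    With step t < 2 / ||A||^2 the iterates converge; in the consistent
    case the limit is the minimum-norm least-squares solution plus the
    projection of the initial guess onto the null space of A.
    """
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        x += t * (A.T @ (b - A @ x))
    return x
```

With A = [1, 0] and b = [3], starting from zero gives the minimum-norm solution [3, 0], while starting from [0, 5] (a null-space vector) gives [3, 5], illustrating the dependence on the initial guess that the paper characterizes.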

Journal ArticleDOI
TL;DR: A 2D Fourier-transform-based reconstruction algorithm that is significantly faster and produces fewer artifacts than simple radial backprojection methods is described.
Abstract: Theoretical and experimental aspects of two-dimensional (2D) biomedical photoacoustic imaging have been investigated. A 2D Fourier-transform-based reconstruction algorithm that is significantly faster and produces fewer artifacts than simple radial backprojection methods is described. The image-reconstruction time for a 208 × 482 pixel image is approximately 1 s. For the practical implementation of 2D photoacoustic imaging, a rectangular detector geometry was used to obtain an anisotropic detection sensitivity in order to reject out-of-plane signals, thereby permitting a tomographic image slice to be reconstructed. This approach was investigated by the numerical modeling of the broadband directional response of a rectangular detector and imaging of various spatially calibrated absorbing targets immersed in a turbid phantom. The experimental setup was based on a Q-switched Nd:YAG excitation laser source and a mechanically line-scanned Fabry-Perot polymer-film ultrasound sensor. For an 800 μm × 200 μm rectangular detector, the reconstructed image slice thickness was 0.8 mm up to a vertical distance of z = 3.5 mm from the detector, increasing thereafter to 2 mm at z = 10 mm. Horizontal and vertical spatial resolutions within the reconstructed slice were approximately 200 and 60 μm, respectively. © 2003 Optical Society of America.

Journal ArticleDOI
TL;DR: The method demonstrates improved image quality in all cases when compared to the conventional FBP and EM methods presently used for clinical data (which do not include resolution modeling).
Abstract: Methodology for PET system modeling using image-space techniques in the expectation maximization (EM) algorithm is presented. The approach, applicable to both list-mode data and projection data, is of particular significance to EM algorithm implementations which otherwise only use basic system models (such as those which calculate the system matrix elements on the fly). A basic version of the proposed technique can be implemented using image-space convolution, in order to include resolution effects into the system matrix, so that the EM algorithm gradually recovers the modeled resolution with each update. The improved system modeling (achieved by inclusion of two convolutions per iteration) results in both enhanced resolution and lower noise, and there is often no need for regularization-other than to limit the number of iterations. Tests have been performed with simulated list-mode data and also with measured projection data from a GE Advance PET scanner, for both [¹⁸F]-FDG and [¹²⁴I]-NaI. The method demonstrates improved image quality in all cases when compared to the conventional FBP and EM methods presently used for clinical data (which do not include resolution modeling). The benefits of this approach for ¹²⁴I (which has a low positron yield and a large positron range, usually resulting in noisier and poorer resolution images) are particularly noticeable.
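The "two convolutions per iteration" idea reduces, in its simplest form, to a Richardson-Lucy-style EM update: convolve with the resolution kernel in the forward step and with its flip in the back step. A minimal 1-D toy (our illustrative code, with the PSF assumed normalized and edge sensitivity ignored; a PET implementation wraps these convolutions around the geometric projector):

```python
import numpy as np

def em_with_resolution_model(y, psf, n_iter=50):
    """ML-EM with an image-space resolution model (1-D toy).

    Here the whole 'system' is blurring by the PSF, so the update is
    Richardson-Lucy: forward-convolve the estimate, compare with the
    data, back-convolve the ratio with the flipped PSF. The modeled
    resolution is gradually recovered with each update.
    """
    x = np.ones_like(y, dtype=float)
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        fwd = np.convolve(x, psf, mode="same")        # forward model
        ratio = y / np.maximum(fwd, 1e-12)
        x *= np.convolve(ratio, psf_flip, mode="same")  # back step
    return x
```

On a blurred spike the iterates progressively sharpen toward the original impulse, which is the resolution-recovery behavior the abstract describes.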

Journal ArticleDOI
TL;DR: The attainment of super resolution (SR) from a sequence of degraded undersampled images could be viewed as reconstruction of the high-resolution (HR) image from a finite set of its projections on a sampling lattice as an optimization problem whose solution is obtained by minimizing a cost function.
Abstract: The attainment of super resolution (SR) from a sequence of degraded undersampled images could be viewed as reconstruction of the high-resolution (HR) image from a finite set of its projections on a sampling lattice. This can then be formulated as an optimization problem whose solution is obtained by minimizing a cost function. The approaches adopted and their analysis to solve the formulated optimization problem are crucial. The image acquisition scheme is important in the modeling of the degradation process. The need for model accuracy is undeniable in the attainment of SR along with the design of the algorithm whose robust implementation will produce the desired quality in the presence of model parameter uncertainty. To keep the presentation focused and of reasonable size, data acquisition with multisensors instead of, say, a video camera is considered.
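The cost-function formulation typically takes the form min_x sum_k ||W_k x - y_k||^2 + lam ||x||^2, where each W_k models the warp, blur, and decimation producing the k-th LR frame. A minimal gradient-descent sketch with explicit observation matrices (our illustrative code and toy operators, not the paper's algorithm):

```python
import numpy as np

def super_resolve(Ws, ys, lam=1e-3, n_iter=500, step=None):
    """Minimize sum_k ||W_k x - y_k||^2 + lam ||x||^2 by gradient descent.

    Ws : list of (m_k, n) observation matrices (warp + blur + decimate)
    ys : list of (m_k,) observed low-resolution frames
    """
    n = Ws[0].shape[1]
    x = np.zeros(n)
    # conservative step from the Lipschitz constant of the gradient
    L = sum(np.linalg.norm(W, 2) ** 2 for W in Ws) + lam
    step = step or 1.0 / L
    for _ in range(n_iter):
        grad = lam * x
        for W, y in zip(Ws, ys):
            grad += W.T @ (W @ x - y)
        x -= step * grad
    return x
```

With two complementary decimation operators (even and odd samples of a 1-D signal), the LR frames jointly determine the HR signal and the iteration recovers it up to the small Tikhonov bias.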

Journal ArticleDOI
15 Sep 2003
TL;DR: The paper presents a broad overview of algorithms for PET and SPECT, giving references to the literature where these algorithms and their applications are described in more detail.
Abstract: Emission computed tomography (ECT) is a technology for medical imaging whose importance is increasing rapidly. There is a growing appreciation for the value of the functional (as opposed to anatomical) information that is provided by ECT and there are significant advancements taking place, both in the instrumentation for data collection, and in the computer methods for generating images from the measured data. These computer methods are designed to solve the inverse problem known as "image reconstruction from projections". This paper uses the various models of the data collection process as the framework for presenting an overview of the wide variety of methods that have been developed for image reconstruction in the major subfields of ECT, which are positron emission tomography (PET) and single-photon emission computed tomography (SPECT). The overall sequence of the major sections in the paper, and the presentation within each major section, both proceed from the more realistic and general models to those that are idealized and application specific. For most of the topics, the description proceeds from the three-dimensional case to the two-dimensional case. The paper presents a broad overview of algorithms for PET and SPECT, giving references to the literature where these algorithms and their applications are described in more detail.