Showing papers on "Iterative reconstruction published in 1983"


Journal ArticleDOI
01 Mar 1983
TL;DR: These methods are based on the discretization of the image domain prior to any mathematical analysis and thus are rooted in a completely different branch of mathematics than the transform methods which are discussed in this issue.
Abstract: Series-expansion reconstruction methods made their first appearance in the scientific literature and in the CT scanner industry around 1970. Great research efforts have gone into them since, but many questions still wait to be answered. These methods, synonymously known as algebraic methods, iterative algorithms, or optimization theory techniques, are based on the discretization of the image domain prior to any mathematical analysis and thus are rooted in a completely different branch of mathematics than the transform methods which are discussed in this issue by Lewitt [51]. How is the model set up? What is the methodology of the approach? Where does mathematical optimization theory enter? What do these reconstruction algorithms look like? How are quadratic optimization, entropy optimization, and Bayesian analysis used in image reconstruction? Finally, why study series expansion methods if transform methods are so much faster? These are some of the questions that are answered in this paper.

440 citations
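
To make the flavor of these series-expansion methods concrete, here is a minimal sketch of one classical member of the family, the ART (Kaczmarz) iteration, in Python/NumPy. The dense system matrix, relaxation parameter, and nonnegativity step are illustrative choices, not details taken from the paper.

```python
import numpy as np

def art(A, b, n_iters=10, relax=0.5):
    """ART/Kaczmarz sketch: cycle through the rows of A (one row per
    measured line integral) and nudge the image estimate x so that
    each measurement b[i] is matched."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)          # ||a_i||^2 for each ray
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
        x = np.clip(x, 0.0, None)            # simple nonnegativity constraint
    return x
```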


Journal ArticleDOI
TL;DR: It is demonstrated here that the detailed time dependence of the resulting trajectory of sample points determines the relative weight and accuracy with which image information at each spatial frequency is measured, establishing theoretical limitations on image quality achievable with a given imaging method.
Abstract: The fundamental operations of nuclear magnetic resonance (NMR) imaging can be formulated, for a large number of methods, as sampling the object distribution in the Fourier spatial-frequency domain, followed by processing the digitized data (often simply by Fourier transformation) to produce a digital image. In these methods, which include reconstruction from projections, Fourier imaging, spin-warp imaging, and echo-planar imaging, controllable gradient fields determine the points in the spatial-frequency domain which are sampled at any given time during the acquisition of data (the free induction decay, or FID). The detailed time dependence of the resulting trajectory of sample points (the k trajectory) determines the relative weight and accuracy with which image information at each spatial frequency is measured, establishing theoretical limitations on image quality achievable with a given imaging method. We demonstrate here that these considerations may be used to compare the theoretical capabilities of NMR imaging methods, and to derive new imaging methods with optimal theoretical imaging properties.

429 citations
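
A worked version of the central quantity helps: the k-space location sampled at time t is the accumulated area under the gradient waveform, k(t) = gamma_bar * integral from 0 to t of G(tau) d tau. The sketch below is a plain quadrature of that definition; the two-axis setup and names are illustrative, not taken from the paper.

```python
import numpy as np

GAMMA_BAR = 42.58e6  # 1H gyromagnetic ratio over 2*pi, in Hz/T

def k_trajectory(grad_x, grad_y, dt):
    """Integrate sampled gradient waveforms (T/m, one sample per dt
    seconds) to get the k-trajectory in cycles/m."""
    kx = GAMMA_BAR * np.cumsum(grad_x) * dt
    ky = GAMMA_BAR * np.cumsum(grad_y) * dt
    return kx, ky

# A constant readout gradient traces a straight line through k-space;
# oscillating gradients (as in echo-planar imaging) trace zig-zag paths.
```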


Journal ArticleDOI
01 Mar 1983
TL;DR: In this paper, the inversion formula for the case of 2D reconstruction from line integrals is manipulated into a number of different forms, each of which may be discretized to obtain different algorithms for reconstruction from sampled data.
Abstract: Transform methods for image reconstruction from projections are based on analytic inversion formulas. In this tutorial paper, the inversion formula for the case of two-dimensional (2-D) reconstruction from line integrals is manipulated into a number of different forms, each of which may be discretized to obtain different algorithms for reconstruction from sampled data. For the convolution-backprojection algorithm and the direct Fourier algorithm the emphasis is placed on understanding the relationship between the discrete operations specified by the algorithm and the functional operations expressed by the inversion formula. The performance of the Fourier algorithm may be improved, with negligible extra computation, by interleaving two polar sampling grids in Fourier space. The convolution-backprojection formulas are adapted for the fan-beam geometry, and other reconstruction methods are summarized, including the rho-filtered layergram method, and methods involving expansions in angular harmonics. A standard mathematical process leads to a known formula for iterative reconstruction from projections at a finite number of angles. A new iterative reconstruction algorithm is obtained from this formula by introducing one-dimensional (1-D) and 2-D interpolating functions, applied to sampled projections and images, respectively. These interpolating functions are derived by the same Fourier approach which aids in the development and understanding of the more conventional transform methods.

388 citations
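
As a concrete anchor for the convolution-backprojection discussion, here is a deliberately plain parallel-beam filtered-backprojection sketch. The unapodized ramp filter and nearest-neighbour backprojection are the crudest discretizations of the two functional operations; the paper's point is exactly that better choices at these two steps yield better algorithms.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Minimal parallel-beam FBP: sinogram has shape
    (n_detectors, n_angles); angles are assumed to span [0, 180)."""
    n_det, n_ang = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))          # |rho| filter
    filtered = np.real(np.fft.ifft(
        np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))
    img = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    yy, xx = np.mgrid[0:n_det, 0:n_det] - centre
    for j, theta in enumerate(np.deg2rad(angles_deg)):
        # detector coordinate hit by each pixel, nearest-neighbour lookup
        t = xx * np.cos(theta) + yy * np.sin(theta) + centre
        img += filtered[np.clip(t, 0, n_det - 1).astype(int), j]
    return img * np.pi / n_ang
```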


Journal ArticleDOI
TL;DR: In this article, the Fourier diffraction projection theorem is stated, extending the traditional Fourier slice theorem to the case of image formation with diffracting illumination.
Abstract: From the standpoint of reporting a new contribution, this paper shows that by using bilinear interpolation followed by direct two-dimensional Fourier inversion, one can obtain reconstructions of quality which is comparable to that produced by the filtered-backpropagation algorithm proposed recently by Devaney. For an N × N image reconstructed from N diffracted projections, the former approach requires approximately 4N FFT's, whereas the backpropagation technique requires approximately N^2 FFT's. We have also taken this opportunity to present the reader with a tutorial introduction to diffraction tomography, an area that is becoming increasingly important not only in medical imaging, but also in underwater and seismic mapping with microwaves and sound. The main feature of the tutorial part is the statement of the Fourier diffraction projection theorem, which is an extension of the traditional Fourier slice theorem to the case of image formation with diffracting illumination.

345 citations
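
The FFT counts are worth making concrete: for N = 128, roughly 4N = 512 one-dimensional FFTs for the interpolation route versus roughly N^2 = 16,384 for backpropagation. A hedged sketch of the interpolation-plus-inversion step follows; SciPy's generic scattered-data interpolator stands in for the paper's bilinear scheme, and the frequency coordinates are assumed already normalized.

```python
import numpy as np
from scipy.interpolate import griddata

def direct_fourier_inversion(kx, ky, vals, n):
    """Direct Fourier reconstruction sketch: the Fourier diffraction
    projection theorem places each diffracted projection's spectrum on
    a circular arc in 2-D frequency space; interpolate those scattered
    samples onto a Cartesian grid, then invert with one 2-D FFT."""
    axis = np.fft.fftshift(np.fft.fftfreq(n))
    gx, gy = np.meshgrid(axis, axis)
    F = griddata((kx.ravel(), ky.ravel()), vals.ravel(), (gx, gy),
                 method='linear', fill_value=0.0)
    return np.fft.ifft2(np.fft.ifftshift(F))
```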


Journal ArticleDOI
TL;DR: The filtered backprojection algorithm of X-ray tomography and the filtered backpropagation algorithm developed recently by the author for diffraction tomography are tested in computer simulations of ultrasonic tomography of two-dimensional objects for which the Rytov approximation is valid.
Abstract: The filtered backprojection algorithm of X-ray tomography and the filtered backpropagation algorithm developed recently by the author for diffraction tomography are tested in computer simulations of ultrasonic tomography of two-dimensional objects for which the Rytov approximation is valid. It is found that the filtered backprojection algorithm gives unsatisfactory results even for wavelengths much smaller than the smallest scale over which the object varies. The filtered backpropagation algorithm yields, in all cases studied, high-quality reconstructions which are simply low-pass filtered versions of the actual object profile. It is shown that the filtered backpropagation algorithm can be approximated by a modified backprojection algorithm having essentially the same computation requirements as filtered backprojection, but yielding considerably higher quality object reconstructions.

217 citations


Journal ArticleDOI
TL;DR: A new operator framework is presented that treats all types of limited-data image-reconstruction problems in a unified way and derives iterative convolution backprojection algorithms that make no restrictions on the location of missing line integrals.
Abstract: Image-reconstruction algorithms implemented on existing computerized tomography (CT) scanners require the collection of line integrals that are evenly spaced over 360 deg. In many practical situations, some of the line integrals are inaccurately measured or are not measured at all. In these limited-data situations, conventional algorithms produce images with severe streak artifacts. Recently, several other image-reconstruction algorithms were suggested, each tailored to a specific type of limited-data problem. These algorithms make minimal use of a priori knowledge about the image; only one has been demonstrated with real x-ray data. We present a new operator framework that treats all types of limited-data image-reconstruction problems in a unified way. From this framework we derive iterative convolution backprojection algorithms that make no restrictions on the location of missing line integrals. All available a priori information is incorporated by constraint operators. The algorithm has been implemented on a commercial CT scanner. We present examples of images reconstructed from real x-ray data in two limited-data situations and demonstrate the use of additional a priori information to reduce streak artifacts further.

174 citations
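
The iteration described above alternates between data space and image space: reconstruct, enforce the a priori constraints, reproject, and overwrite only the missing rays. A minimal sketch, with caller-supplied fbp, project, and constrain operators (all names hypothetical; the paper's constraint operators are more elaborate):

```python
import numpy as np

def limited_data_reconstruct(measured, mask, fbp, project, constrain,
                             n_iters=20):
    """mask is True where line integrals were actually measured;
    missing entries of the sinogram are re-estimated each pass."""
    sino = np.where(mask, measured, 0.0)
    for _ in range(n_iters):
        img = fbp(sino)                          # convolution backprojection
        img = constrain(img)                     # e.g. support, nonnegativity
        sino = np.where(mask, measured, project(img))  # keep measured rays
    return img
```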


Journal ArticleDOI
TL;DR: In this paper, a Bayesian maximum a posteriori (MAP) reconstruction method is proposed to reduce the unavoidable artifacts that arise because the null-space components of deterministic solutions are usually zero.
Abstract: An arbitrary source function cannot be determined fully from projection data that are limited in number and range of viewing angle. There exists a null subspace in the Hilbert space of possible source functions about which the available projection measurements provide no information. The null-space components of deterministic solutions are usually zero, giving rise to unavoidable artifacts. It is demonstrated that these artifacts may be reduced by a Bayesian maximum a posteriori (MAP) reconstruction method that permits the use of significant a priori information. Since normal distributions are assumed for the a priori and measurement-error probability densities, the MAP reconstruction method presented here is equivalent to the minimum-variance linear estimator with nonstationary mean and covariance ensemble characterizations. A more comprehensive Bayesian approach is suggested in which the ensemble mean and covariance specifications are adjusted on the basis of the measurements.

163 citations
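
Because both densities are Gaussian, the MAP estimate has a closed form, which is what makes it identical to the minimum-variance linear estimator. A small dense-matrix sketch for y = Hx + n, with prior mean x_mean, prior covariance R, and noise covariance Sigma (realistic image sizes would require iterative solvers rather than the explicit inverses used here):

```python
import numpy as np

def gaussian_map_estimate(H, y, x_mean, R, Sigma):
    """Closed-form MAP / minimum-variance estimate under Gaussian
    prior N(x_mean, R) and Gaussian measurement noise N(0, Sigma)."""
    Si = np.linalg.inv(Sigma)
    A = H.T @ Si @ H + np.linalg.inv(R)          # posterior precision
    return x_mean + np.linalg.solve(A, H.T @ Si @ (y - H @ x_mean))
```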


Journal ArticleDOI
01 Mar 1983
TL;DR: The Dynamic Spatial Reconstructor (DSR) was designed by the Biodynamics Research Unit at the Mayo Clinic to provide synchronous volume imaging, that is, stop-action, high-repetition-rate, and simultaneous scanning of many parallel thin cross sections spanning the entire anatomic extent of the bodily organ(s) of interest.
Abstract: Most X-ray CT scanners require a few seconds to produce a single two-dimensional (2-D) image of a cross section of the body. The accuracy of full three-dimensional (3-D) images of the body synthesized from a contiguous set of 2-D images produced by sequential CT scanning of adjacent body slices is limited by 1) slice-to-slice registration (positioning of patient); 2) slice thickness; and 3) motion, both voluntary and involuntary, which occurs during the total time required to scan all slices. Therefore, this method is inadequate for true dynamic 3-D imaging of moving organs like the heart, lungs, and circulation. To circumvent these problems, the Dynamic Spatial Reconstructor (DSR) was designed by the Biodynamics Research Unit at the Mayo Clinic to provide synchronous volume imaging, that is, stop-action (1/100 s), high-repetition-rate (up to 60/s), simultaneous scanning of many parallel thin cross sections (up to 240, each 0.45 mm thick, 0.9 mm apart) spanning the entire anatomic extent of the bodily organ(s) of interest. These capabilities are achieved by using multiple X-ray sources and multiple 2-D fluoroscopic video camera assemblies on a continually rotating gantry. Desired tradeoffs between temporal, spatial, and density resolution can be achieved by retrospective selection and processing of appropriate subsets of the total data recorded during a continuous DSR scan sequence.

160 citations


Journal ArticleDOI
TL;DR: It is argued in this paper that the additional information present in the three-dimensional data is useful for improving reconstructions of images.
Abstract: List-mode data collected in a positron-emission tomography system having time-of-flight measurements are three dimensional, but all algorithms which have been published to date operate on two-dimensional data derived from these three-dimensional data. We argue in this paper that the additional information present in the three-dimensional data is useful for improving reconstructions of images.

145 citations


Journal ArticleDOI
TL;DR: In this article, a two-stage reconstruction procedure was proposed for reconstructing images from data acquired with a new type of gamma camera based upon an electronic method of collimating gamma radiation.
Abstract: Iterative algorithms have been investigated for reconstructing images from data acquired with a new type of gamma camera based upon an electronic method of collimating gamma radiation. The camera is composed of two detection systems which record a sequential interaction of the emitted gamma radiation. Coincident counting in accordance with Compton scattering kinematics leads to a localization of activity upon a multitude of conical surfaces throughout the object. A two-stage reconstruction procedure in which conical line projection images as seen by each position sensing element of the first detector are reconstructed in the first stage, and tomographic images are reconstructed in the second stage, has been developed. Computer simulation studies of both stages and first-stage reconstruction studies with preliminary experimental data are reported. Experimental data were obtained with one detection element of a prototype germanium detector. A microcomputer based circuit was developed to record coincident counts between the germanium detector and an uncollimated conventional scintillation camera. Point sources of Tc-99m and Cs-137 were used to perform preliminary measurements of sensitivity and point spread function characteristics of electronic collimation.

144 citations


Journal ArticleDOI
TL;DR: The reconstruction problem due to nonlinear and nonplanar current paths in impedance imaging is overcome by solving Laplace's equation numerically for every iteration and by using a new back-projection algorithm to modify the impedance profile.
Abstract: In this paper, we propose to develop a new imaging technology, computerized impedance tomography (CIT) for imaging the thorax. Our study involves reconstructing images of thoracic transverse plane impedance distributions noninvasively and nondestructively and exploring the potential of CIT. We overcome the reconstruction problem due to nonlinear and nonplanar current paths in impedance imaging by solving Laplace’s equation numerically for every iteration and by using a new back-projection algorithm to modify the impedance profile. We discuss advantages and disadvantages associated with impedance imaging, the computer model, and back-projection algorithms used in reconstructing impedance images. We present reconstructed impedance images with 8 projection angles and different projection methods. We identify important variables affecting the quality of reconstructed images, and discuss the resolution and accuracy of this imaging technique. We summarize numerical aspects, computer requirements, and limitation...
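
The forward step the authors run at every iteration, a numerical solution of Laplace's equation, can be sketched with simple Jacobi relaxation on a uniform grid. This is a deliberately stripped-down stand-in: a real CIT forward model handles variable conductivity and electrode boundary conditions.

```python
import numpy as np

def solve_laplace(phi0, fixed_mask, n_sweeps=500):
    """Jacobi relaxation for Laplace's equation on a 2-D grid.
    fixed_mask marks electrode/boundary nodes whose potential is held
    fixed; every other node relaxes toward the average of its four
    neighbours (np.roll wraps at the edges, so the boundary ring
    should be included in fixed_mask)."""
    phi = phi0.copy()
    for _ in range(n_sweeps):
        nbr_avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(fixed_mask, phi, nbr_avg)
    return phi
```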

Journal ArticleDOI
TL;DR: Two previously published postreconstruction beam hardening correction methods are described within a common framework and are compared from the points of view of the nearness of the corrected polychromatic projection data to the desired monochromatic projectionData and the visual quality of the reconstructions.
Abstract: The general nature of postreconstruction beam hardening correction methods is discussed. A methodology for choosing the energy of reconstruction is presented based on a technique of evaluating the "nearness" of two projection data sets. Two previously published postreconstruction beam hardening correction methods are described within a common framework. These methods differ at a number of independent places and so one can produce hybrid methods by interchanging some but not all of the choices. A basic difference between the methods is that one needs only the initial reconstruction during the postreconstruction correcting phase, while the other needs the original projection data as well. Both methods have been implemented and are compared (using a mathematical head phantom) from the points of view of the nearness of the corrected polychromatic projection data to the desired monochromatic projection data and the visual quality of the reconstructions. Variants and hybrids of the two methods are also investigated and recommendations based on the results are presented.
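
The general shape of such a post-reconstruction correction can be sketched as: reproject the first-pass image, map the polychromatic projections to equivalent monochromatic ones through a calibration curve, and reconstruct the difference as an additive correction. The operator names (project, fbp, linearize) are placeholders, and neither of the two published methods is reproduced exactly here.

```python
def post_recon_bh_correct(img0, project, fbp, linearize):
    """Post-reconstruction beam-hardening correction sketch.
    linearize encodes the scanner-specific polychromatic-to-
    monochromatic calibration at the chosen energy of reconstruction."""
    p_poly = project(img0)            # reprojection of the initial image
    p_mono = linearize(p_poly)        # calibrated linearization
    return img0 + fbp(p_mono - p_poly)
```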

Journal ArticleDOI
TL;DR: An 11 by 11 image reconstructed from noisy scattered field data is shown to closely match the original scattering object, and the improvement possible by constraining the reconstruction to be spatially band limited is demonstrated.

Journal Article
TL;DR: The formulation of an "optimal" filter for improving the quality of digitally recorded nuclear medicine images is reported in this paper, which forms a Metz filter for each image based upon the total number of counts in the image, which in turn determines the average noise level.
Abstract: The formulation of an "optimal" filter for improving the quality of digitally recorded nuclear medicine images is reported in this paper. The method forms a Metz filter for each image based upon the total number of counts in the image, which in turn determines the average noise level. The parameters of the filter were optimized for a set of simulated images using the minimization of the mean-square error as the criterion. The speed of the image formation results from the use of an array processor. In a study of localization receiver operating characteristics (LROC) using the Alderson liver phantom, a significant improvement in tumor localization was found in images filtered with this technique, compared with the original digital images and those filtered by the nine-point binomial smoothing algorithm. The technique has been found useful for the filtering of static and dynamic studies as well as the two-dimensional pre-reconstruction filtering of images from single photon emission computerized tomography.
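
The Metz filter itself has a compact closed form: it acts as an inverse filter 1/MTF where the modulation transfer function is high and rolls off smoothly where it is low, with the order parameter rising with the image's count level so that noisier images are smoothed more. A sketch (the count-to-order mapping optimized in the paper is not reproduced):

```python
import numpy as np

def metz_filter(mtf, order):
    """Metz filter values on a frequency grid: multiply the image's
    2-D FFT by this and inverse-transform. order grows with total
    counts, trading resolution recovery against noise amplification."""
    mtf = np.clip(mtf, 1e-6, 1.0)     # guard the division below
    return (1.0 - (1.0 - mtf**2) ** order) / mtf
```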

Journal ArticleDOI
TL;DR: The Bialy algorithm generalizes the Papoulis–Gerchberg iteration to cases in which the ideal low-pass operator is replaced by some other operators, which leads to new iterative algorithms for band-limited signal extrapolation.
Abstract: We deal with iterative least-squares solutions of the linear signal-restoration problem g = Af. First, several existing techniques for solving this problem with different underlying models are unified. Specifically, the following are shown to be special cases of a general iterative procedure [H. Bialy, Arch. Ration. Mech. Anal. 4, 166 (1959)] for solving linear operator equations in Hilbert spaces: (1) a Van Cittert-type algorithm for deconvolution of discrete and continuous signals; (2) an iterative procedure for regularization when g is contaminated with noise; (3) a Papoulis–Gerchberg algorithm for extrapolation of continuous signals [A. Papoulis, IEEE Trans. Circuits Syst. CAS-22, 735 (1975); R. W. Gerchberg, Opt. Acta 21, 709 (1974)]; (4) an iterative algorithm for discrete extrapolation of band-limited infinite-extent discrete signals, including the minimum-norm property of the extrapolation obtained by the iteration [A. Jain and S. Ranganath, IEEE Trans. Acoust. Speech Signal Process. ASSP-29 (1981)]; and (5) a certain iterative procedure for extrapolation of band-limited periodic discrete signals [V. Tom, IEEE Trans. Acoust. Speech Signal Process. ASSP-29, 1052 (1981)]. The Bialy algorithm also generalizes the Papoulis–Gerchberg iteration to cases in which the ideal low-pass operator is replaced by some other operators. In addition, a suitable modification of this general iteration is shown. This technique leads us to new iterative algorithms for band-limited signal extrapolation. In numerical simulations some of these algorithms provide a fast reconstruction of the sought signal.
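
Case (3), the Papoulis–Gerchberg iteration, is easy to state in a few lines: alternately enforce the band limit in the Fourier domain and restore the known samples in the signal domain. A minimal discrete sketch (masks and iteration count are illustrative):

```python
import numpy as np

def papoulis_gerchberg(g, known_mask, band_mask, n_iters=200):
    """g: observed signal (values outside known_mask are ignored);
    band_mask marks the retained in-band FFT bins. Each pass projects
    onto the band-limited set, then onto the set agreeing with the
    known data."""
    f = np.where(known_mask, g, 0.0)
    for _ in range(n_iters):
        f = np.real(np.fft.ifft(np.fft.fft(f) * band_mask))
        f = np.where(known_mask, g, f)        # reinsert known samples
    return f
```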

Proceedings ArticleDOI
14 Apr 1983
TL;DR: This paper presents a framework for detecting, locating and describing objects contained within a 2D cross-section by using noisy measurements of the Radon transform directly, rather than post-processing a reconstructed image.
Abstract: This paper considers the problem of observing a 2D function via its 1D projections (Radon transform); it presents a framework for detecting, locating and describing objects contained within a 2D cross-section by using noisy measurements of the Radon transform directly, rather than post-processing a reconstructed image. This framework offers the potential for significant improvements in applications where (1) attempts to perform an initial inversion with insufficient measurement data result in severely degraded reconstructions, and (2) the ultimate goal of the process is to obtain several specific pieces of information about the cross-section. To illustrate this perspective, we focus our attention on the problem of obtaining maximum-likelihood (ML) estimates of the parameters characterizing a single random object situated within a deterministic background medium, and we investigate the performance, robustness, and computational structure of the ML estimation procedure.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo reconstruction procedure is presented for retrieval of objects from coded images, where the reconstruction process is modeled as an optimization problem whose cost function is related to how well the coded-image constraints are satisfied.
Abstract: A new Monte Carlo reconstruction procedure is presented for retrieval of objects from their coded images. The reconstruction process is modeled as an optimization problem whose cost function is related to how well the coded-image constraints are satisfied. Reduction of the cost function is achieved by an annealing process analogous to the cooling of a melt to produce an ordered crystal. The method is demonstrated by reconstructing two two-dimensional objects from their one-dimensional coded images.
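
The annealing loop itself is generic Metropolis sampling under a falling temperature; what is specific to the paper is the cost function measuring how badly the current object violates the coded-image constraints, left here as a caller-supplied callable. A sketch under those assumptions:

```python
import numpy as np

def anneal_reconstruct(shape, cost, n_steps=50000, t0=1.0, cool=0.9999):
    """Monte Carlo reconstruction sketch: propose random single-pixel
    changes, accept with the Metropolis rule, and cool slowly so the
    object 'crystallizes' into a low-cost configuration."""
    rng = np.random.default_rng(0)
    obj = rng.random(shape)
    c, T = cost(obj), t0
    for _ in range(n_steps):
        trial = obj.copy()
        trial[tuple(rng.integers(0, s) for s in shape)] = rng.random()
        c_new = cost(trial)
        if c_new < c or rng.random() < np.exp((c - c_new) / T):
            obj, c = trial, c_new             # Metropolis acceptance
        T *= cool                             # cooling schedule
    return obj
```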

Journal ArticleDOI
TL;DR: A four-step phase-restoration procedure that provides faithful reconstructions of images consisting of faint detail superimposed upon bright, broad backgrounds is presented.
Abstract: A four-step phase-restoration procedure that provides faithful reconstructions of images consisting of faint detail superimposed upon bright, broad backgrounds is presented. Such images tend to cause existing phase-retrieval algorithms considerable difficulty. The two middle steps consist of the simplest of our recently reported phase-restoration schemes and of Fienup’s algorithm. The first and crucial step is to subtract a fraction of the central lobe of the given intensity of the Fourier transform of the image. The image is then reconstructed in two parts, which are combined in the final step. Encouraging computational results are presented.

Journal ArticleDOI
TL;DR: It is shown that, in the presence of noise, restoration by convex projections is superior to the Gerchberg-Papoulis method.
Abstract: In this paper we investigate how the method of convex projections for image restoration behaves in the presence of noise. We also introduce and test a new noise-smoothing procedure in which the restored image is forced to lie within a certain L2 distance of the noisy data. We show that, in the presence of noise, restoration by convex projections is superior to the Gerchberg-Papoulis method.
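
The noise-smoothing constraint is a convex set, a ball of radius eps around the noisy data g, so its projector has a one-line closed form and slots directly into the projection cycle alongside the other constraints. A sketch:

```python
import numpy as np

def project_to_data_ball(f, g, eps):
    """Projection onto { f : ||f - g||_2 <= eps }: leave f alone if it
    is already within eps of the data, otherwise pull it radially onto
    the surface of the ball."""
    r = f - g
    norm = np.linalg.norm(r)
    return f if norm <= eps else g + (eps / norm) * r
```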

Journal ArticleDOI
TL;DR: A new method of image reconstruction for single photon emission computed tomography is presented, basically a filtered backprojection with some modifications, enabling improvement of the signal-to-noise ratio and spatial resolution in off-center areas compared with the conventional averaging method of two conjugate projections.
Abstract: A new method of image reconstruction for single photon emission computed tomography is presented. The method is basically a filtered backprojection with some modifications. The algorithm consists of three steps: normalization of observed projections, modified convolution operation, and weighted backprojection. The weighting function for backprojection is determined to provide perfect attenuation compensation for a uniform attenuation medium and to keep the statistical noise in the reconstructed image low. The relative contributions of two conjugate projections to the image can be controlled by the reconstruction parameters, enabling improvement of the signal-to-noise ratio and spatial resolution in off-center areas compared with the conventional averaging method of two conjugate projections. Simulation studies indicated that the method provides a satisfactory image for an extended source of 99mTc (mu = 0.15 cm^-1) having a diameter of up to approximately 35 cm. A myocardium phantom is adequately reconstructed from a 180 to 225 degree scan. The effect of nonuniform attenuation medium surrounding the source region can be corrected. This paper presents the mathematical basis of the procedure, the evaluation of the statistical noise, and some illustrative computer simulations.

Journal ArticleDOI
A. Macovski
01 Mar 1983
TL;DR: A cross-sectional image of an object can be accurately reconstructed if its projections or line integrals are known at all angles, and this property has been exploited in a variety of applications, primarily in the area of medical imaging.
Abstract: A cross-sectional image of an object can be accurately reconstructed if its projections or line integrals are known at all angles. This fundamental and exciting property has been applied to a variety of applications, primarily in the area of medical imaging. In many cases, however, the physical measurements fail to accurately define the complete set of line integrals. This leads to inaccuracies and distortions in the resultant reconstruction. The physical measurements can be inadequate in a number of ways. These include nonlinearities, noise, and insufficient data. The nonlinearities can arise from a nonlinear detector process, or the inability to accurately extract the information in the exponent by taking logs. The noise can be the usual statistical uncertainty of the measurement or an interfering component such as scatter. The data can be insufficient in a number of ways including inadequate sampling or regions of missing data. Also, the measurements of a source distribution can be distorted by an unknown attenuation distribution, resulting in errors in the reconstruction.

Journal ArticleDOI
TL;DR: It is shown that for system response functions of the specific form d(theta, phi)/r^2, with d(theta, phi) an angular function describing the imaging system, the filter computation can always be reduced to a single integration which, in many cases, may be performed analytically.
Abstract: Application of the Fourier space deconvolution algorithm to three-dimensional (3D) reconstruction problems necessitates the computation of a frequency-space filter, which requires taking the 3D Fourier transform of the system response function. In this paper, it is shown that for system response functions of the specific form d(theta, phi)/r^2, with d(theta, phi) an angular function describing the imaging system, the filter computation can always be reduced to a single integration which, in many cases, may be performed analytically. Complete expressions are derived for the general 3D filter, and two examples are given to illustrate the use of such expressions.

Journal ArticleDOI
TL;DR: In this article, a simple recursive algorithm is proposed for reconstructing certain classes of two-dimensional objects from their autocorrelation functions (or equivalently from the modulus of their Fourier transforms).
Abstract: A simple recursive algorithm is proposed for reconstructing certain classes of two-dimensional objects from their autocorrelation functions (or equivalently from the modulus of their Fourier transforms—the phase-retrieval problem). The solution is shown to be unique in some cases. The objects contain reference points not satisfying the holography condition but satisfying weaker conditions. Included are objects described by Fiddy et al. [Opt. Lett. 8, 96 (1983)] satisfying Eisenstein’s theorem.


Book
01 Jan 1983
TL;DR: This paper formulates the limited-data image reconstruction problem as an optimization problem and presents examples of images with reduced streak artifact generated from limited data.
Abstract: Image reconstruction algorithms implemented on existing CT scanners require the collection of line integrals that are evenly spaced over 360 degrees [1]. In many practical situations requirements for high temporal resolution or the presence of an x-ray opaque structure prevent the measurement of all the line integrals. Attempts to use existing algorithms in this "limited data" situation result in images with severe streak artifacts [2]. This paper formulates the limited data image reconstruction problem as an optimization problem. An estimate of the missing data is sought which is consistent with the measured data and any a priori knowledge about the object. An iterative procedure computes a set of error signals at each step and uses these errors to improve the missing data estimate. A variety of iterative algorithms can be derived using different methods of updating the estimate. These algorithms have been implemented on a commercial CT scanner. Examples of images with reduced streak artifact generated from limited data are presented.

15 Aug 1983
TL;DR: In this article, an analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented, and it is shown that the quantities resulting from the average are biased, but that the biases are easily estimated and compensated.
Abstract: An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.
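
For the companion power-spectrum (Labeyrie-style) average, the photon-noise bias is simply each frame's detected-photon count, which is what makes it easy to estimate and compensate; the Knox-Thompson cross spectrum carries an analogous correctable bias term. A sketch of the power-spectrum case only (the cross-spectrum bookkeeping of the actual algorithm is not reproduced):

```python
import numpy as np

def debiased_power_spectrum(frames):
    """frames: list of 2-D photon-count images. For Poisson detection,
    E|FFT(frame)|^2 = (signal term) + (photon count), so subtract each
    frame's total count before averaging."""
    acc = 0.0
    for frame in frames:
        acc = acc + np.abs(np.fft.fft2(frame)) ** 2 - frame.sum()
    return acc / len(frames)
```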

Journal ArticleDOI
TL;DR: Several methods which first estimate the missing data and then utilize standard reconstruction algorithms to obtain an image are investigated, and the incorporation of a priori information into the algorithm is shown to produce faster convergence.
Abstract: This paper addresses the task of image reconstruction from an incomplete set of projection data. Several methods which first estimate the missing data and then utilize standard reconstruction algorithms to obtain an image are investigated. Results from simulations are presented which illustrate the difficulty in comparing algorithms objectively, particularly when a simple test phantom is chosen. The incorporation of a priori information into the algorithm, an approach which has previously been discussed in the literature, is shown to produce faster convergence.

Journal ArticleDOI
TL;DR: In this paper, an analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented, and it is shown that the quantities resulting from the average are biased, but that the biases are easily estimated and compensated.

Journal ArticleDOI
TL;DR: The comparison assesses the importance of including a correction for attenuation as well as demonstrating how closely a simple geometric attenuation correction, applied to the filtered back-projection reconstruction method, approximates to a more accurate correction incorporated in the computation of line integrals during iterative reconstruction.
Abstract: Transverse section tomograms of experimental phantoms and patients have been obtained using a GE 400T camera and a filtered back-projection reconstruction technique. These tomograms have been compared with the corresponding sections reconstructed from the same tomographic projection data, but using iterative algorithms with correction for photon attenuation. The comparison assesses the importance of including a correction for attenuation as well as demonstrating how closely a simple geometric attenuation correction, applied to the filtered back-projection reconstruction method, approximates to a more accurate correction incorporated in the computation of line integrals during iterative reconstruction. A comparison is also made between the behaviour of reconstruction algorithms with simulated projection data and real data in terms of convergence properties, and some shortcomings arising from simulation are noted.
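
The "simple geometric attenuation correction" applied to the filtered back-projection images is, in spirit, a first-order multiplicative correction: divide each reconstructed pixel by its attenuation factor averaged over the projection angles. A hedged sketch follows; the path_length geometry routine and the angle count are placeholders, and the correction actually used in the study may differ in detail.

```python
import numpy as np

def geometric_attenuation_correct(img, mu, path_length, n_ang=64):
    """First-order (Chang-type) correction for a uniform attenuation
    coefficient mu: path_length(i, j, theta) must return the distance
    from pixel (i, j) to the body outline along angle theta
    (hypothetical helper supplied by the caller)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_ang, endpoint=False)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            att = np.mean([np.exp(-mu * path_length(i, j, th))
                           for th in thetas])
            out[i, j] = img[i, j] / max(att, 1e-6)
    return out
```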