
Showing papers on "Iterative reconstruction published in 1991"


Journal ArticleDOI
TL;DR: It is suggested that synchronous detection or demodulation can be used in MRI systems in place of magnitude detection to provide complete suppression of undesired quadrature components, to preserve polarity and phase information, and to eliminate the biases and reduction in signal-to-noise ratio (SNR) and contrast in low SNR images.
Abstract: Magnitude detection of complex images in magnetic resonance imaging (MRI) is immune to the effects of incidental phase variations, although in some applications information is lost or images are degraded. It is suggested that synchronous detection or demodulation can be used in MRI systems in place of magnitude detection to provide complete suppression of undesired quadrature components, to preserve polarity and phase information, and to eliminate the biases and reduction in signal-to-noise ratio (SNR) and contrast in low SNR images. The incidental phase variations in an image are removed through the use of a homodyne demodulation reference, which is derived from the image or the object itself. Synchronous homodyne detection has been applied to the detection of low SNR images, the reconstruction of partial k-space images, the simultaneous detection of water and lipid signals in quadrature, and the preservation of polarity in inversion-recovery images.
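
To make the homodyne idea above concrete, here is a minimal sketch (not the authors' code; the function name and the fraction of k-space used for the phase reference are assumptions): a heavily low-pass-filtered copy of the complex image supplies the demodulation reference, and keeping the real part of the demodulated image preserves the polarity that magnitude detection would discard.

```python
# Hedged sketch of homodyne (synchronous) detection for a complex MR image.
# The phase reference is derived from the image itself via a low-pass filter;
# lowpass_frac is an assumed parameter, not a value from the paper.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

def homodyne_detect(img_complex, lowpass_frac=0.125):
    ny, nx = img_complex.shape
    k = fftshift(fft2(img_complex))                    # centered k-space
    wy, wx = int(ny * lowpass_frac) // 2, int(nx * lowpass_frac) // 2
    mask = np.zeros((ny, nx))
    mask[ny//2 - wy: ny//2 + wy, nx//2 - wx: nx//2 + wx] = 1.0
    phase_ref = ifft2(ifftshift(k * mask))             # low-resolution image
    demod = np.exp(-1j * np.angle(phase_ref))          # demodulation reference
    return np.real(img_complex * demod)                # synchronous detection
```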

574 citations


Journal ArticleDOI
TL;DR: Deterministic approximations to Markov random field (MRF) models are derived and one of the models is shown to give in a natural way the graduated nonconvexity (GNC) algorithm proposed by A. Blake and A. Zisserman (1987).
Abstract: Deterministic approximations to Markov random field (MRF) models are derived. One of the models is shown to give in a natural way the graduated nonconvexity (GNC) algorithm proposed by A. Blake and A. Zisserman (1987). This model can be applied to smooth a field while preserving its discontinuities. A class of more complex models is then proposed in order to deal with a variety of vision problems. All the theoretical results are obtained in the framework of statistical mechanics and mean field techniques. A parallel, iterative algorithm to solve the deterministic equations of the two models is presented, together with some experiments on synthetic and real images.
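
For readers who want the underlying energy, the following sketch (my own 1-D notation, not taken from the paper) is the weak-membrane functional whose mean-field treatment links MRF models to graduated nonconvexity.

```latex
% Weak-membrane energy with an explicit binary line process l_i (notation assumed;
% d = data, f = smoothed field):
E(f, l) = \sum_i \Big[ (f_i - d_i)^2
        + \lambda^2 (f_{i+1} - f_i)^2 (1 - l_i) + \alpha\, l_i \Big]
% Summing the Gibbs factor e^{-E/T} over the binary l_i gives an effective energy
% in f alone; as the temperature T decreases it approaches Blake and Zisserman's
% truncated quadratic, which is the continuation path followed by GNC.
```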

486 citations


Journal ArticleDOI
TL;DR: The adaptively restored images have better quality than the nonadaptively restored ones based on visual observations and on an objective criterion of merit which accounts for the noise masking property of the visual system.
Abstract: The development of the algorithm is based on a set theoretic approach to regularization. Deterministic and/or statistical information about the undistorted image and statistical information about the noise are directly incorporated into the iterative procedure. The restored image is the center of an ellipsoid bounding the intersection of two ellipsoids. The proposed algorithm, which has the constrained least squares algorithm as a special case, is extended into an adaptive iterative restoration algorithm. The spatial adaptivity is introduced to incorporate properties of the human visual system. Convergence of the proposed iterative algorithms is established. For the experimental results which are shown, the adaptively restored images have better quality than the nonadaptively restored ones based on visual observations and on an objective criterion of merit which accounts for the noise masking property of the visual system.
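
As a point of reference for the constrained-least-squares special case mentioned above, here is a minimal non-adaptive sketch; H, C, alpha and beta are placeholders I introduce, and the paper's set-theoretic ellipsoid bounds and spatial adaptivity are not reproduced.

```python
# Basic regularized iterative restoration of the successive-approximations type.
# H: degradation (blur) matrix, C: high-pass regularization operator,
# alpha: regularization weight, beta: relaxation parameter (all assumed).
import numpy as np

def iterative_restore(y, H, C, alpha=0.01, beta=1.0, n_iter=200):
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # Gradient step on ||y - Hx||^2 + alpha * ||Cx||^2
        x = x + beta * (H.T @ (y - H @ x) - alpha * (C.T @ (C @ x)))
    return x
```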

342 citations


Journal ArticleDOI
TL;DR: In this paper, a local, nonlinear transform is derived that allows an STM or AFM image which has been distorted by a nonideal tip to be reconstructed, where the image reconstruction transform is related to the Legendre transforms of the distorted image surface and the tip surface.

329 citations


Journal ArticleDOI
TL;DR: A survey is presented of some of the surface reconstruction methods that can be found in the literature; the focus is on a small, recent, and important subset of the published reconstruction techniques.
Abstract: A survey is presented of some of the surface reconstruction methods that can be found in the literature; the focus is on a small, recent, and important subset of the published reconstruction techniques. The techniques are classified based on the surface representation used, implicit versus explicit functions. A study is made of the important aspects of the surface reconstruction techniques. One aspect is the viewpoint invariance of the methods. This is an important property if object recognition is the ultimate objective. The robustness of the various methods is examined. It is determined whether the parameter estimates are biased, and the sensitivity to obscuration is addressed. The latter two aspects are particularly important for fitting functions in the implicit form. A detailed description is given of a parametric reconstruction method for three-dimensional object surfaces that involves numeric grid generation techniques and variational principle formulations. This technique is invariant to rigid motion in three-dimensional space.

299 citations



Proceedings ArticleDOI
03 Jun 1991
TL;DR: An approach to visual sampling and reconstruction motivated by concepts from numerical grid generation is presented, and adaptive meshes that can nonuniformly sample and reconstruct intensity and range data are presented.
Abstract: An approach to visual sampling and reconstruction motivated by concepts from numerical grid generation is presented. Adaptive meshes that can nonuniformly sample and reconstruct intensity and range data are presented. These meshes are dynamic models which are assembled by interconnecting nodal masses with adjustable springs. Acting as mobile sampling sites, the nodes observe properties of the input data, such as intensities, depths, gradients, and curvatures. Based on these nodal observations, the springs automatically adjust their stiffnesses so as to distribute the available degrees of freedom of the reconstructed model in accordance with the local complexity of the input data. The adaptive mesh algorithm runs at interactive rates with continuous 3-D display on a graphics workstation. It is applied to the adaptive sampling and reconstruction of images and surfaces.

217 citations


Journal ArticleDOI
02 Nov 1991
TL;DR: The quantitative accuracy of cardiac SPECT is significantly improved using simultaneously acquired transmission and emission data which are obtained in clinically acceptable patient scanning times.
Abstract: Photon attenuation in cardiac single photon emission computed tomography (SPECT) is a major factor contributing to the quantitative inaccuracy and the decrease in specificity of lesion detection. A measured map of the attenuation distribution was used in combination with iterative reconstruction algorithms to accurately compensate for the variable attenuation in the chest. The transmission and emission data were acquired simultaneously using a multidetector, fan-beam collimated SPECT system with a transmission line source (Tc-99m) precisely aligned at the focal line opposite one of the detectors and of a different energy than the emission source (Tl-201). The contamination of transmission data by the high-energy photopeaks of Tl-201 was removed based on measurements from emission-only acquisition detectors. An algorithm was derived to eliminate scatter of Tc-99m transmission photons into the lower-energy Tl-201 window of all three detectors. Results are given for both phantom and patient studies.

186 citations


Book ChapterDOI
07 Jul 1991
TL;DR: This paper develops a Bayesian algorithm for PET image reconstruction in which a magnetic resonance image is used to provide information about the location of potential discontinuities in the PET image, and demonstrates that the use of a line process in the reconstruction process has the potential for significant improvements in reconstructed image quality, particularly when prior MR edge information is available.
Abstract: A statistical approach to PET image reconstruction offers several potential advantages over the filtered backprojection method currently employed in most clinical PET systems: (1) the true data formation process may be modeled accurately to include the Poisson nature of the observation process and factors such as attenuation, scatter, detector efficiency and randoms; and (2) an a priori statistical model for the image may be employed to model the generally smooth nature of the desired spatial distribution and to include information such as the presence of anatomical boundaries, and hence potential discontinuities, in the image. In this paper we develop a Bayesian algorithm for PET image reconstruction in which a magnetic resonance image is used to provide information about the location of potential discontinuities in the PET image. This is achieved through the use of a Markov random field model for the image which incorporates a “line process” to model the presence of discontinuities. In the case where no a priori edge information is available, this line process may be estimated directly from the data. When edges are available from MR images, this information is introduced as a set of known a priori line sites in the image. It is demonstrated through computer simulation that the use of a line process in the reconstruction process has the potential for significant improvements in reconstructed image quality, particularly when prior MR edge information is available.
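
The objective implied by this construction can be sketched as follows; the notation is mine rather than the paper's, with A the PET system matrix, y the sinogram counts, and l the line process whose sites are fixed wherever MR edges are available.

```latex
% Poisson log-likelihood plus an MRF prior with a line process (notation assumed):
\hat{x} = \arg\max_{x \ge 0} \; \sum_j \Big[ y_j \log (Ax)_j - (Ax)_j \Big]
          \; - \; \beta \sum_{\{i,k\} \in \mathcal{N}} (x_i - x_k)^2 \, (1 - l_{ik})
% l_{ik} = 1 switches off smoothing across a boundary between neighbouring voxels;
% with MR side information the corresponding l_{ik} are set to 1 a priori,
% otherwise l is estimated jointly with x.
```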

169 citations


Journal ArticleDOI
TL;DR: Reconstruction procedures that account for attenuation in forming maximum-likelihood estimates of activity distributions in positron-emission tomography are extended to include regularization constraints and accidental coincidences.
Abstract: Reconstruction procedures that account for attenuation in forming maximum-likelihood estimates of activity distributions in positron-emission tomography are extended to include regularization constraints and accidental coincidences. A mathematical model is used for these effects. The corrections are incorporated into the iterations of an expectation-maximization algorithm for numerically producing the maximum-likelihood estimate of the distribution of radioactivity within a patient. The images reconstructed with this procedure are unbiased and exhibit lower variance than those reconstructed from precorrected data.
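
A hedged sketch of what such an EM update can look like when attenuation factors and a known randoms rate are folded into the Poisson model (variable names and this exact factorization are my assumptions, not the authors' notation): the data are modeled as y_i ~ Poisson(a_i (Cx)_i + r_i).

```python
# EM update for emission image x with attenuation and accidental coincidences
# kept inside the statistical model instead of precorrecting the data.
import numpy as np

def em_update(x, y, C, a, r, eps=1e-12):
    """x: current image (J,); y: measured counts (I,); C: system matrix (I, J);
    a: attenuation (survival) factor per detector pair (I,); r: expected randoms (I,)."""
    forward = a * (C @ x) + r                  # expected counts per detector pair
    ratio = y / np.maximum(forward, eps)
    sens = C.T @ a                             # sensitivity image, sum_i c_ij * a_i
    return x * (C.T @ (a * ratio)) / np.maximum(sens, eps)
```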

153 citations


Journal ArticleDOI
TL;DR: In this article, a preliminary study of the application of simulated annealing (SA) to complex permittivity reconstruction in microwave tomography is presented, and the results show that SA can converge to an accurate solution in cases where the two deterministic methods fail.
Abstract: A preliminary study of the application of simulated annealing (SA) to complex permittivity reconstruction in microwave tomography is presented. Reconstructions of a simplified model of a human arm obtained with simulated noise-free data are presented for three different methods: SA, quenching, and a Newton-Kantorovich method. These results show that SA can converge to an accurate solution in cases where the two deterministic methods fail. For this reason SA can be used to get closer to the final solution before applying a faster deterministic method.
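
For orientation, a generic simulated-annealing loop of the kind such a study relies on might look like the sketch below; forward_model, the move size, and the geometric cooling schedule are placeholders of mine rather than the paper's actual choices.

```python
# Generic simulated annealing over pixel-wise complex permittivity.
import numpy as np

def anneal(eps0, measured, forward_model, t0=1.0, cooling=0.95,
           n_sweeps=200, step=0.05, rng=np.random.default_rng(0)):
    """eps0: complex-valued initial permittivity map; measured: observed field data;
    forward_model(eps) -> predicted data (shapes must match)."""
    eps = eps0.copy()
    cost = np.sum(np.abs(forward_model(eps) - measured) ** 2)
    t = t0
    for _ in range(n_sweeps):
        for idx in range(eps.size):
            trial = eps.copy()
            trial.flat[idx] += step * (rng.standard_normal() +
                                       1j * rng.standard_normal())
            c_trial = np.sum(np.abs(forward_model(trial) - measured) ** 2)
            # Metropolis acceptance: always keep improvements, sometimes accept
            # uphill moves so the search can escape local minima.
            if c_trial < cost or rng.random() < np.exp(-(c_trial - cost) / t):
                eps, cost = trial, c_trial
        t *= cooling                            # geometric cooling (assumed)
    return eps
```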

Journal Article
TL;DR: A segmented attenuation correction technique has been developed for positron emission tomography which computes attenuation correction factors automatically from transmission images for use in the final image reconstruction.
Abstract: A segmented attenuation correction technique has been developed for positron emission tomography which computes attenuation correction factors automatically from transmission images for use in the final image reconstruction. The technique segments the transmission image into anatomic regions by thresholding the histogram of the attenuation values corresponding to different regions such as soft tissue and lungs. Average values of attenuation are derived from these regions and new attenuation correction factors are computed by forward projection of these regions into sinograms for correction of emission images. The technique has been tested with phantom studies and with clinical cardiac studies in patients for 30- and 10-min attenuation scan times. This method for attenuation correction was linearly correlated (slope = 0.937 and r² = 0.935) with the standard directly measured method, reducing noise in the final image, and reducing the attenuation scan time.
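
The segmentation-and-forward-projection step can be sketched roughly as follows; the thresholds and attenuation coefficients are illustrative values I supply, not those reported in the paper, and radon_transform stands for whatever projector matches the scanner geometry.

```python
# Sketch of segmented attenuation correction: classify a noisy transmission
# image into background, lung and soft tissue, replace each region with a
# fixed attenuation coefficient, and forward-project to get correction factors.
import numpy as np

def segmented_acf(mu_image, radon_transform,
                  lung_range=(0.02, 0.07), mu_lung=0.035, mu_tissue=0.096):
    """mu_image: reconstructed attenuation map (assumed units 1/cm at 511 keV);
    radon_transform: callable mapping an image to its sinogram (line integrals)."""
    seg = np.zeros_like(mu_image)
    lung = (mu_image >= lung_range[0]) & (mu_image < lung_range[1])
    tissue = mu_image >= lung_range[1]
    seg[lung] = mu_lung
    seg[tissue] = mu_tissue
    # Attenuation correction factors are the exponentials of the line integrals of mu.
    return np.exp(radon_transform(seg))
```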

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the fidelity of the reconstructed image by computing the response to an impulse-source function under various non-idealized but more realistic conditions, such as finite receiving aperture and circular geometry.
Abstract: The potential of imaging large ionospheric structures by applying tomographic techniques has only been proposed recently and has already received interest and increasing attention by workers in ionospheric research. In the combined case of an idealized plane geometry and complete and continuous data extending to ±∞ the inversion is exact and unique. This is no longer the case if any of the idealized conditions are removed. Specifically, the limitations are investigated that arise from limited angles, sampled data of finite receiving aperture, and circular geometry. Under these various nonidealized but more realistic conditions, the fidelity of the reconstructed image is investigated by computing the response to an impulse-source function. The deterioration of the reconstructed image, the sensitivity to the position in the image plane, and the possible existence of ghost images are all numerically investigated and illustrated. Possible methods of improvement are suggested.

Journal ArticleDOI
TL;DR: An iterative reconstruction method which minimizes the effects of ill-conditioning is discussed and a regularization method which integrates prior information into the image reconstruction was developed which improves the conditioning of the information matrix in the modified Newton-Raphson algorithm.
Abstract: An iterative reconstruction method which minimizes the effects of ill-conditioning is discussed. Based on the modified Newton-Raphson algorithm, a regularization method which integrates prior information into the image reconstruction was developed. This improves the conditioning of the information matrix in the modified Newton-Raphson algorithm. Optimal current patterns were used to obtain voltages with maximal signal-to-noise ratio (SNR). A complete finite element model (FEM) was used for both the internal and the boundary electric fields. Reconstructed images from phantom data show that the use of regularization, optimal current patterns, and a complete FEM model improves image accuracy. The authors also investigated factors affecting the image quality of the iterative algorithm, such as the initial guess, image iteration, and optimal current updating.
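
One regularized (modified) Newton-Raphson step of the general kind described above can be sketched as below; the Jacobian, forward solver, regularization matrix R, and weight lam are stand-ins for the paper's FEM-based quantities.

```python
# Sketch of a single regularized Newton-Raphson update for impedance imaging.
import numpy as np

def newton_step(sigma, v_meas, forward, jacobian, R, lam=1e-3):
    """sigma: conductivity parameters; v_meas: measured electrode voltages;
    forward(sigma) -> predicted voltages; jacobian(sigma) -> sensitivity matrix J;
    R: regularization matrix encoding prior information; lam: its weight."""
    v_pred = forward(sigma)
    J = jacobian(sigma)
    # Adding lam * R improves the conditioning of the information matrix J^T J.
    lhs = J.T @ J + lam * R
    rhs = J.T @ (v_meas - v_pred)
    return sigma + np.linalg.solve(lhs, rhs)
```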

Journal ArticleDOI
TL;DR: The convergence rate of the weighted least squares with conjugate gradient (WLS-CG) algorithm is about ten times that of the maximum likelihood with expectation maximization (ML-EM) algorithm.
Abstract: The properties of the maximum likelihood with expectation maximization (ML-EM) and the weighted least squares with conjugate gradient (WLS-CG) algorithms for use in compensation for attenuation and detector response in cardiac SPECT imaging were studied. A realistic phantom, derived from a patient X-ray CT study to simulate Tl-201 SPECT data, was used in the investigation. In general, the convergence rate of the WLS-CG algorithm is about ten times that of the ML-EM algorithm. Also, the WLS-CG exhibits a faster increase in image noise at large iteration numbers than the ML-EM algorithm.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: Simulation results with still images show that a very good reconstruction can be obtained even when DC and certain low frequencies are missing in many blocks, and the proposed algorithm produces an image which is maximally smooth among all the images with the same available coefficients.
Abstract: This paper presents a new technique for image reconstruction from a partial set of transform coefficients in DCT image coders. By utilizing the correlation between adjacent blocks and imposing smoothing constraints on the reconstructed image, the proposed algorithm produces an image which is maximally smooth among all the images with the same available coefficients. The optimal solution can be obtained either by a linear transformation or through an iterative process. The underlying principle of the algorithm is applicable to any unitary block-transform and is very effective for recovering the DC and other low frequency coefficients. Applications in DCT based still image codecs and video codecs using motion compensation have been considered. Simulation results with still images show that a very good reconstruction can be obtained even when DC and certain low frequencies are missing in many blocks.
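
A hedged sketch of the iterative variant mentioned in the abstract, reduced to a single 8x8 block for brevity: alternately smooth the image and re-impose the coefficients that were actually received. The neighbour-averaging smoother and the use of scipy's DCT are my simplifications; the paper additionally exploits correlation with adjacent blocks.

```python
# Iterative recovery of missing DCT coefficients under a smoothness constraint.
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct_block(known_coeffs, known_mask, n_iter=50):
    """known_coeffs: 8x8 array of received DCT coefficients (zeros elsewhere);
    known_mask: boolean 8x8 array, True where a coefficient was received."""
    block = idctn(known_coeffs, norm='ortho')
    for _ in range(n_iter):
        # Smoothing constraint: simple 4-neighbour averaging with edge replication.
        padded = np.pad(block, 1, mode='edge')
        smooth = 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                         padded[1:-1, :-2] + padded[1:-1, 2:])
        # Project back onto the set of images with the available coefficients.
        c = dctn(smooth, norm='ortho')
        c[known_mask] = known_coeffs[known_mask]
        block = idctn(c, norm='ortho')
    return block
```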

Journal ArticleDOI
TL;DR: It is demonstrated that a feasibility stopping criterion controls the noise in a reconstructed image, but is insensitive to quantitation errors, and that the use of an appropriate overrelaxation parameter can accelerate the convergence of the ML-based method during the iterative process without quantitative instabilities.
Abstract: Emission computerised tomography images reconstructed using a maximum likelihood-expectation maximization (ML)-based method with different reconstruction kernels and 1-200 iterations are compared to images reconstructed using filtered backprojection (FBP). ML-based reconstructions using a single pixel (SP) kernel with or without a sieve filter show no quantitative advantage over FBP except in the background, where a reduction of noise is possible if the number of iterations is kept small (…)

Journal ArticleDOI
TL;DR: In this article, an image reconstruction method motivated by positron emission tomography (PET) is discussed and an iterative approach which requires the solution of simple quadratic equations is proposed.
Abstract: An image reconstruction method motivated by positron emission tomography (PET) is discussed. The measurements tend to be noisy and so the reconstruction method should incorporate the statistical nature of the noise. The authors set up a discrete model to represent the physical situation and arrive at a nonlinear maximum a posteriori probability (MAP) formulation of the problem. An iterative approach which requires the solution of simple quadratic equations is proposed. The authors also present a methodology which allows them to experimentally optimize an image reconstruction method for a specific medical task and to evaluate the relative efficacy of two reconstruction methods for a particular task in a manner which meets the high standards set by the methodology of statistical hypothesis testing. The new MAP algorithm is compared to a method which maximizes likelihood and with two variants of the filtered backprojection method.


Journal ArticleDOI
02 Nov 1991
TL;DR: In this article, the Gibbs penalty was used to guide spatially variant regularization for tomographic image reconstruction, and the penalty weights were determined from structural side information, such as the locations of anatomical boundaries in high-resolution magnetic resonance images.
Abstract: The authors report a preliminary investigation of a spatially variant penalized-likelihood method for tomographic image reconstruction based on a Gibbs penalty. The penalty weights are determined from structural side information, such as the locations of anatomical boundaries in high-resolution magnetic resonance images. Such side information will be imperfect in practice, and a simple simulation demonstrates the importance of accounting for the errors in boundary locations. The authors discuss methods for prescribing the penalty weights when the side information is noisy. Simulation results suggest that even imperfect side information is useful for guiding spatially variant regularization.

Journal ArticleDOI
TL;DR: It is shown that the EM algorithm can be efficiently parallelized using the (modified) partition-by-box scheme with the message passing model.
Abstract: The EM algorithm for PET image reconstruction has two major drawbacks that have impeded the routine use of the EM algorithm: the long computation time due to slow convergence and a large memory required for the image, projection, and probability matrix. An attempt is made to solve these two problems by parallelizing the EM algorithm on multiprocessor systems. An efficient data and task partitioning scheme, called partition-by-box, based on the message passing model is proposed. The partition-by-box scheme and its modified version have been implemented on a message passing system, Intel iPSC/2, and a shared memory system, BBN Butterfly GP1000. The implementation results show that, for the partition-by-box scheme, a message passing system of complete binary tree interconnection with fixed connectivity of three at each node can have similar performance to that with the hypercube topology, which has a connectivity of log₂ N for N processing elements (PEs). It is shown that the EM algorithm can be efficiently parallelized using the (modified) partition-by-box scheme with the message passing model.

Journal ArticleDOI
Reiner Eschbach
TL;DR: Different versions of the ED algorithm are compared for their application to computer-generated holography with respect to reconstruction errors and the overall brightness of the reconstruction.
Abstract: Error diffusion (ED) is a powerful tool for the generation of binary computer-generated holograms (CGH's). Several modifications of the original ED algorithm have been proposed to incorporate special requirements and assumptions present in CGH's. This paper compares different versions of the algorithm for their application to computer-generated holography with respect to reconstruction errors and the overall brightness of the reconstruction.
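
As a baseline for comparison, plain Floyd-Steinberg error diffusion looks like the sketch below; the CGH-specific modifications the paper compares (altered weights, complex-valued error terms, and so on) are not reproduced here.

```python
# Standard Floyd-Steinberg error diffusion: binarize while pushing the
# quantization error onto not-yet-visited neighbours.
import numpy as np

def error_diffuse(field, threshold=0.5):
    """field: 2-D array in [0, 1] (e.g. a desired hologram transmittance);
    returns a binary array, scanning rows left to right."""
    f = field.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= threshold else 0.0
            err = f[y, x] - out[y, x]
            if x + 1 < w:               f[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               f[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: f[y + 1, x + 1] += err * 1 / 16
    return out
```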

Journal ArticleDOI
TL;DR: The multigrid implementation was found to accelerate the convergence rate of high-frequency components of the image when the image possessed the local smoothness property, but in other cases it was unhelpful, and may even slow down the convergence rate.
Abstract: The numerical behavior of multigrid implementations of the Landweber, generalized Landweber, ART, and MLEM iterative image reconstruction algorithms is investigated. Comparisons between these algorithms, and with their single-grid implementations, are made on two small-scale synthetic PET systems, for phantom objects exhibiting different characteristics, and on one full-scale synthetic system, for a Shepp-Logan phantom. The authors also show analytically the effects of noise and initial condition on the generalized Landweber iteration, and note how to choose the shaping operator to filter out noise in the data, or to enhance features of interest in the reconstructed image. Original contributions include (1) numerical studies of the convergence rates of single-grid and multigrid implementations of the Landweber, generalized Landweber, ART, and MLEM iterations and (2) effects of noise and initial condition on the generalized Landweber iteration, with procedures for filtering out noise or enhancing image features.
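
The single-grid Landweber iteration that these multigrid schemes build on is compact enough to sketch directly; the relaxation parameter below is one safe default, and the shaping operator of the generalized variant is omitted.

```python
# Single-grid Landweber iteration for a linear reconstruction problem A x = b.
import numpy as np

def landweber(A, b, n_iter=100, tau=None):
    """A: system matrix (I, J); b: measured projections (I,)."""
    if tau is None:
        # Convergence requires 0 < tau < 2 / ||A||_2^2 (spectral norm squared).
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * (A.T @ (b - A @ x))
    return x
```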

Journal ArticleDOI
TL;DR: A Kalman filter for optimal restoration of multichannel images is presented, derived using a multi-channel semicausal image model that includes between-channel degradation.
Abstract: A Kalman filter for optimal restoration of multichannel images is presented. This filter is derived using a multichannel semicausal image model that includes between-channel degradation. Both stationary and nonstationary image models are developed. This filter is implemented in the Fourier domain and computation is reduced from O(Λ³N³M⁴) to O(Λ³N³M²) for an M×M N-channel image with degradation length Λ. Color (red, green, and blue (RGB)) images are used as examples of multichannel images, and restoration in the RGB and YIQ domains is investigated. Simulations are presented in which the effectiveness of this filter is tested for different types of degradation and different image model estimates.


Journal ArticleDOI
TL;DR: The present study verifies and thus calibrates these methods by determining true numbers of ganglion cells in serial reconstructions and then using each method to estimate the same populations, finding that the estimates are consistently low when h is minimal (reference and look‐up sections are adjacent).
Abstract: The empirical and disector methods are unbiased sampling methods for determining numbers of neurons. The present study verifies and thus calibrates these methods by determining true numbers of ganglion cells in serial reconstructions and then using each method to estimate the same populations. The empirical method gives accurate counts but is laborious (inefficient). Five separate disector analyses, distinguished by height (h), were done for each ganglion. The findings are: (1) that the estimates are consistently low when h is minimal (reference and look-up sections are adjacent), but (2) the estimates are accurate when h is greater (one to four sections intervene between reference and look-up sections). We ascribe the difficulties with the first disector to "lost" or "invisible" caps. We emphasize that we would not have known of the problem unless we verified our counts. If there is suspicion that difficulties with profile recognition might occur, we recommend that serial sections of an appropriately chosen sample of tissue be prepared and 500-1,000 neurons (or, more generally, particles) be reconstructed. Then the method of choice can be used on the tissue of choice to make certain of the necessary accuracy before proceeding with the main study.
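
A toy worked example of the disector estimate (all numbers invented for illustration): only particle tops seen in the reference section but not in the look-up section are counted, and that count is scaled by the sampled volume.

```python
# Disector-style estimate of total particle number from a Q- count.
def disector_estimate(q_minus, frame_area_um2, height_um, n_disectors,
                      reference_volume_um3):
    """q_minus: particles present in the reference but not the look-up section;
    height_um: disector height h; returns an estimated total particle number."""
    sampled_volume = n_disectors * frame_area_um2 * height_um
    numerical_density = q_minus / sampled_volume          # particles per um^3
    return numerical_density * reference_volume_um3

# Example: 42 tops in 20 disectors of 2500 um^2 x 10 um within a 5e6 um^3 ganglion:
# disector_estimate(42, 2500, 10, 20, 5e6) -> about 420 cells.
```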

Journal ArticleDOI
TL;DR: It is shown through analytic calculations that the average optical transfer function (OTF) is significant for high spatial frequencies in the case of imaging through atmospheric turbulence with an adaptive optics system composed of a Hartmann-type wave-front sensor and a deformable mirror possessing far fewer actuators than one per atmospheric coherence diameter.
Abstract: The use of limited degree-of-freedom adaptive optics in conjunction with statistical averaging and a linear image reconstruction algorithm is addressed. Image reconstruction is traded for full predetection compensation. It is shown through analytic calculations that the average optical transfer function (OTF) is significant for high spatial frequencies in the case of imaging through atmospheric turbulence with an adaptive optics system composed of a Hartmann-type wave-front sensor and a deformable mirror possessing far fewer actuators than one per atmospheric coherence diameter (r₀). Statistical averaging is used to overcome the effects of measurement noise and randomness in individual realizations of the OTF. The imaging concept and signal-to-noise considerations are presented.

Journal ArticleDOI
TL;DR: It is shown that accurate reconstruction of bifurcations is achievable with parametric models and an efficient optimization algorithm for object estimation is presented and its performance on simulated, phantom, and in vivo magnetic resonance angiograms is demonstrated.
Abstract: By exploiting a priori knowledge of arterial shape and smoothness, subpixel accuracy reconstructions are achieved from only four noisy projection images. The method incorporates a priori knowledge of the structure of branching arteries into a natural optimality criterion that encompasses the entire arterial tree. An efficient optimization algorithm for object estimation is presented, and its performance on simulated, phantom, and in vivo magnetic resonance angiograms is demonstrated. It is shown that accurate reconstruction of bifurcations is achievable with parametric models.

Patent
07 Jun 1991
TL;DR: In this paper, a plurality of contact electrodes, located around the thorax, are used for real-time imaging, with simultaneous measurement of voltage differences at adjacent pairs of electrodes, and measurement and division of the drive current into the voltage measurements before image reconstruction.
Abstract: A method of, and apparatus for, real-time imaging are described, employing a plurality of contact electrodes (1) located, for example, around the thorax (2), with simultaneous measurement of voltage differences at adjacent pairs of electrodes (1), digital demodulation of the voltages, and measurement and division of the drive current into the voltage measurements before image reconstruction by the use of transputers (20, 22, 24 and 26).

Proceedings ArticleDOI
02 Nov 1991
TL;DR: A fast data-acquisition method for 3-D X-ray computed tomography is proposed which continuously rotates a cone-beam X-ray source and a 2-D detector constructed by stacking 1-D circular detector arrays.
Abstract: A fast data-acquisition method for 3-D X-ray computed tomography is proposed. The method continuously rotates a cone-beam X-ray source and a 2-D detector constructed by stacking 1-D circular detector arrays for fan-beam computed tomography. Simultaneously, a patient is translated in the field of view of the X-ray source. This method is called helical-scan, because it provides cone-beam projections measured by moving an X-ray source on a helix surrounding the patient. An approximate convolution backprojection image reconstruction algorithm for the helical scan is developed by extending L.A. Feldkamp's (1984) cone-beam reconstruction algorithm for the circular scan. How one should choose the geometrical parameters of the data-acquisition system is discussed. Furthermore, the performance of the proposed method is analyzed by evaluating the point spread function of the reconstruction algorithm.
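
For reference, the source trajectory assumed by such a helical-scan acquisition can be written as below; R is the source-to-axis distance and p the table feed per rotation, and the notation is mine rather than the paper's.

```latex
% Helical source trajectory in the patient coordinate frame (notation assumed):
\mathbf{s}(\beta) = \Big( R\cos\beta,\; R\sin\beta,\; \frac{p\,\beta}{2\pi} \Big),
\qquad \beta \in [0,\, 2\pi N_{\mathrm{turns}}]
% A Feldkamp-type extension backprojects each voxel from the cone-beam projections
% whose source positions lie on the nearby portion of this helix, with the usual
% cone-angle weighting applied to the filtered projections.
```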