Showing papers in "IEEE Transactions on Medical Imaging in 1989"


Journal ArticleDOI
TL;DR: The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images and the results are compared to those obtained with other methods.
Abstract: Blood vessels usually have poor local contrast, and the application of existing edge detection algorithms yields results which are not satisfactory. An operator for feature extraction based on the optical and spatial properties of objects to be recognized is introduced. The gray-level profile of the cross section of a blood vessel is approximated by a Gaussian-shaped curve. The concept of matched filter detection of signals is used to detect piecewise linear segments of blood vessels in these images. Twelve different templates that are used to search for vessel segments along all possible directions are constructed. Various issues related to the implementation of these matched filters are discussed. The results are compared to those obtained with other methods.
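
A minimal sketch of the matched-filter construction described above, assuming a Gaussian cross-sectional profile and twelve orientations spaced 15 degrees apart; the kernel length, sigma, and zero-mean normalization are illustrative choices, not the paper's exact templates:

```python
import numpy as np
from scipy.ndimage import convolve

def matched_kernels(sigma=2.0, length=9, n_angles=12):
    """Zero-mean line templates with a Gaussian cross-section, one per orientation."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_angles):
        theta = k * np.pi / n_angles                  # 12 directions, 15 degrees apart
        d = -xs * np.sin(theta) + ys * np.cos(theta)  # distance from the segment axis
        along = xs * np.cos(theta) + ys * np.sin(theta)
        support = np.abs(along) <= half               # limit the segment length
        kern = np.where(support, np.exp(-d**2 / (2 * sigma**2)), 0.0)
        kern[support] -= kern[support].mean()         # zero mean suppresses flat background
        kernels.append(kern)
    return kernels

def vessel_response(image):
    """Maximum matched-filter response over all orientations at each pixel."""
    image = image.astype(float)
    return np.max([convolve(image, k) for k in matched_kernels()], axis=0)
```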

1,692 citations


Journal ArticleDOI
TL;DR: A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model, which reduces to the EM maximum-likelihood algorithm.
Abstract: A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model. For the M-step of the algorithm, a form of coordinate gradient ascent is derived. The algorithm reduces to the EM maximum-likelihood algorithm as the Markov random-field prior tends towards a uniform distribution. Three different Gibbs function priors are examined. Reconstructions of 3-D images obtained from the Poisson model of single-photon-emission computed tomography are presented.
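
For orientation, a sketch of the plain Poisson EM (maximum-likelihood) update that the GEM algorithm reduces to under a flat prior; the system matrix and sizes are toy values, and the paper's Gibbs-penalized M-step (coordinate gradient ascent) is only indicated in the comment:

```python
import numpy as np

def mlem_step(lam, A, y, eps=1e-12):
    """One EM update for Poisson data y ~ Poisson(A @ lam).

    The paper's GEM augments the M-step with a Gibbs (Markov random field)
    penalty maximized by coordinate gradient ascent; with a uniform prior
    it reduces to exactly this multiplicative update.
    """
    proj = A @ lam                        # forward projection
    ratio = y / np.maximum(proj, eps)     # measured / estimated counts per bin
    return lam * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), eps)

# Tiny demo: 2 source pixels, 3 detector bins.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
truth = np.array([4.0, 9.0])
y = np.random.poisson(A @ truth).astype(float)
lam = np.ones(2)
for _ in range(200):
    lam = mlem_step(lam, A, y)
print(lam)   # approaches the ML estimate of the source intensities
```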

674 citations


Journal ArticleDOI
TL;DR: An estimation concept for determination of the fractal dimension based upon the concept of fractional Brownian motion is discussed and a normalized fractional Brownian motion feature vector is defined from this estimation concept.
Abstract: Following B.B. Mandelbrot's fractal theory (1982), it was found that the fractal dimension could be obtained in medical images by the concept of fractional Brownian motion. An estimation concept for determination of the fractal dimension based upon the concept of fractional Brownian motion is discussed. Two applications are found: (1) classification; (2) edge enhancement and detection. For the purpose of classification, a normalized fractional Brownian motion feature vector is defined from this estimation concept. It represents the normalized average absolute intensity difference of pixel pairs on a surface at different scales. The feature vector uses relatively few data items to represent the statistical characteristics of the medical image surface and is invariant to linear intensity transformation. For the edge enhancement and detection application, a transformed image is obtained by calculating the fractal dimension of each pixel over the whole medical image. The fractal dimension value of each pixel is obtained by calculating the fractal dimension of a 7*7 pixel block centered on this pixel.
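
A compact sketch of the fBm-style estimate, under the common convention that the mean absolute intensity difference E|I(p)-I(q)| scales as |p-q|^H and the surface dimension is D = 3 - H; the scale choices and horizontal/vertical pixel pairing are simplifications of the paper's scheme:

```python
import numpy as np

def fractal_dimension(img, scales=(1, 2, 3, 4)):
    """Estimate the fractal dimension of an image surface via fBm scaling."""
    img = img.astype(float)
    mean_diffs = []
    for s in scales:
        dx = np.abs(img[:, s:] - img[:, :-s])   # horizontal pairs at distance s
        dy = np.abs(img[s:, :] - img[:-s, :])   # vertical pairs at distance s
        mean_diffs.append(np.concatenate([dx.ravel(), dy.ravel()]).mean())
    # Normalized mean_diffs would serve as the classification feature vector.
    H, _ = np.polyfit(np.log(scales), np.log(mean_diffs), 1)   # slope = Hurst exponent
    return 3.0 - H
```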

430 citations


Journal ArticleDOI
TL;DR: The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed, which is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data.
Abstract: The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed. ANALYZE can be used with 3-D imaging modalities based on X-ray computed tomography, radionuclide emission tomography, ultrasound tomography, and magnetic resonance imaging. The package is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data. One of the most versatile and powerful capabilities in ANALYZE is image volume rendering for 3-D display. An important advantage of this technique is that it can be used to display 3-D images directly from the original data set and to provide on-the-fly combinations of selected image transformations, such as surface segmentation, cutting planes, transparency, and/or volume set operations (union, intersection, difference, etc.). The module has been optimized to be fast (interactive) without compromising image quality. The software is written entirely in C and runs on standard UNIX workstations.

366 citations


Journal ArticleDOI
TL;DR: A method for detecting one type of breast tumor, circumscribed masses, in mammograms is presented, which relies on a combination of criteria used by experts, including the shape, brightness contrast, and uniform density of tumor areas.
Abstract: A method for detecting one type of breast tumor, circumscribed masses, in mammograms is presented. It relies on a combination of criteria used by experts, including the shape, brightness contrast, and uniform density of tumor areas. The method uses modified median filtering to enhance mammogram images and template matching to detect the tumors. In the template matching step, suspicious areas are identified by thresholding the cross-correlation values, and a percentile method is used to determine a threshold for each film. In addition, two tests are used to remove false alarms from the resulting candidates. The results obtained by applying these techniques to a set of test images are described and judged encouraging.
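
A toy version of the template-matching stage with a per-film percentile threshold; the correlation normalization here is simplified (the local image mean is not subtracted), and the shape/density tests that remove false alarms are omitted:

```python
import numpy as np
from scipy.signal import fftconvolve

def suspicious_regions(img, template, percentile=99.5):
    """Flag pixels whose normalized cross-correlation with the tumor
    template exceeds a percentile threshold chosen for this film."""
    img = img.astype(float)
    t = template.astype(float) - template.mean()
    corr = fftconvolve(img, t[::-1, ::-1], mode='same')        # correlation via flipped kernel
    local_energy = fftconvolve(img**2, np.ones_like(t), mode='same')
    denom = np.sqrt(np.maximum(local_energy, 1e-12)) * np.linalg.norm(t)
    ncc = corr / denom
    return ncc > np.percentile(ncc, percentile)                # candidate mask
```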

298 citations


Journal ArticleDOI
TL;DR: A comprehensive methodology for image segmentation is presented, applied to nuclear magnetic resonance images of the brain, and the results of volumetric calculations for the cerebral cortex, white matter, cerebellum, ventricular system, and caudate nucleus are presented.
Abstract: A comprehensive methodology for image segmentation is presented. Tools for differential contouring, intensity contouring, and outline optimization are discussed, as well as methods for automating such procedures. After segmentation, regional volumes and image intensity distributions can be determined. The methodology is applied to nuclear magnetic resonance images of the brain. Examples of the results of volumetric calculations for the cerebral cortex, white matter, cerebellum, ventricular system, and caudate nucleus are presented. An image intensity distribution is demonstrated for the cerebral cortex.

290 citations


Journal ArticleDOI
TL;DR: Computer simulation studies are presented which demonstrate significantly improved reconstructed images achieved by an ART algorithm as compared to IRR methods.
Abstract: The author presents an algebraic reconstruction technique (ART) as a viable alternative in computerized tomography (CT) from limited views. Recently, algorithms of iterative reconstruction-reprojection (IRR) based on the method of convolution-backprojection have been proposed for application in limited-view CT. Reprojection was used in an iterative fashion alternating with backprojection as a means of estimating projection values within the sector of missing views. In algebraic methods of reconstruction for CT, only those projections corresponding to known data are required. Reprojection along missing views would merely serve to introduce redundant equations. Computer simulation studies are presented which demonstrate significantly improved reconstructed images achieved by an ART algorithm as compared to IRR methods.
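
A minimal Kaczmarz-style ART sketch over the known raysums only, illustrating why missing views need not be reprojected: each known ray contributes one consistent equation, and the estimate is projected onto that ray's hyperplane in turn:

```python
import numpy as np

def art(A, p, n_sweeps=50, relax=1.0):
    """Kaczmarz ART: cycle through the measured rays (rows of A) and
    project the current estimate onto each hyperplane a_i . x = p_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A**2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] == 0:
                continue
            x += relax * (p[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```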

290 citations


Journal ArticleDOI
TL;DR: This is the first implementation of a relaxation algorithm for edge detection in echocardiograms that compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone.
Abstract: An automatic algorithm has been developed for high-speed detection of cavity boundaries in sequential 2-D echocardiograms using an optimization algorithm called simulated annealing (SA). The algorithm has three stages. (1) A predetermined window of size n*m is decimated to size n'*m' after low-pass filtering. (2) An iterative radial gradient algorithm is employed to determine the center of gravity (CG) of the cavity. (3) 64 radii which originate from the CG defined in stage 2 are bounded by the high-probability region. Each bounded radius is defined as a link in a 1-D, 64-member cyclic Markov random field. This algorithm is unique in that it compounds spatial and temporal information along with a physical model in its decision rule, whereas most other algorithms base their decisions on spatial data alone. This is the first implementation of a relaxation algorithm for edge detection in echocardiograms. Results attained using this algorithm on real data have been highly encouraging.
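
A generic simulated-annealing skeleton of the kind used in stage 3; the actual energy couples the 64 radii in a cyclic Markov random field with temporal and physical-model terms, for which `energy` and `propose` are placeholders:

```python
import numpy as np

def simulated_annealing(energy, state, propose, t0=1.0, cooling=0.995,
                        n_iter=5000, rng=None):
    """Minimize `energy` by SA: accept uphill moves with prob. exp(-dE/T).
    `state` would be the vector of 64 radii; `propose` perturbs one radius."""
    rng = rng or np.random.default_rng()
    e, t = energy(state), t0
    for _ in range(n_iter):
        cand = propose(state, rng)
        de = energy(cand) - e
        if de < 0 or rng.random() < np.exp(-de / t):
            state, e = cand, e + de          # accept the move
        t *= cooling                         # geometric cooling schedule
    return state
```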

231 citations


Journal ArticleDOI
TL;DR: A tracking algorithm for identification of vessel contours in digital coronary arteriograms was developed and validated and provided accurate measurement of lumen width and percent stenosis that was relatively invariant to the vessel's orientation, dynamic range, background variation, and degree of blurring.
Abstract: A tracking algorithm for identification of vessel contours in digital coronary arteriograms was developed and validated. Given an initial start-of-search point, the tracking process was fully automated by utilizing the spatial continuity of the vessel's centerline, orientation, diameter, and density. The incremental sections along a major vessel were sequentially identified, based on the assumptions of geometric similarity and continuation between adjacent incremental sections. The algorithm consisted of an extrapolation-update process which was guided by a matched filter. The filter parameters were adapted to the measured lumen width. The tracking process was robust and extremely efficient as indicated by test results on synthetic images, digital subtraction angiograms, and cineangiograms. The algorithm provided accurate measurement of lumen width and percent stenosis that was relatively invariant to the vessel's orientation, dynamic range, background variation, and degree of blurring.

224 citations


Journal ArticleDOI
TL;DR: A method to quantify the motion of the heart from digitized sequences of two-dimensional echocardiograms (2-D echos) was recently proposed, but further analysis is required to determine what part of this motion is due to translation, rotation, contraction, and deformation of the myocardium.
Abstract: A method to quantify the motion of the heart from digitized sequences of two-dimensional echocardiograms (2-D echos) was recently proposed. This method computes, at every point of the 2-D echos, the 2-D apparent velocity vector (or optical flow) which characterizes its interframe motion. However, further analysis is required to determine what part of this motion is due to translation, rotation, contraction, and deformation of the myocardium. A method to obtain this information locally is presented. The proposed method assumes that the interframe velocity field U(x,y), V(x,y) can be locally described by linear equations of the form U(x,y)=a+Ax+By; V(x,y)=b+Cx+Dy. This additional constraint is introduced into the computation of the local velocity field by the method of projections onto convex sets. Since the constraint is only valid locally, the myocardium must first be divided into sectors and the velocity fields computed independently for each sector.
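
A least-squares sketch for recovering the six local parameters within one sector (the paper instead imposes the constraint inside a projections-onto-convex-sets computation); divergence and curl of the fitted field then separate contraction from rotation:

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Fit U = a + A x + B y, V = b + C x + D y to sampled velocities
    (x, y, u, v are 1-D arrays over one myocardial sector)."""
    G = np.column_stack([np.ones_like(x), x, y])
    (a, A, B), *_ = np.linalg.lstsq(G, u, rcond=None)
    (b, C, D), *_ = np.linalg.lstsq(G, v, rcond=None)
    translation = (a, b)
    divergence = A + D            # contraction/expansion rate
    rotation = C - B              # rigid rotation (curl)
    shear = (A - D, B + C)        # remaining deformation components
    return translation, divergence, rotation, shear
```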

153 citations


Journal ArticleDOI
TL;DR: Several surface and volume rendering techniques, including several new methods developed by the authors specifically for scintigraphic data, are compared using nuclear medicine data in relation to the goals of three-dimensional display.
Abstract: Several surface and volume rendering techniques are compared using nuclear medicine data including several new methods developed by the authors specifically for scintigraphic data. The techniques examined are summed projection, thresholded projection, threshold-based surface illumination, volumetric compositing, maximum-activity projection, sun-weighted maximum-activity projection, and variable attenuation. The advantages and disadvantages of each method are discussed in relation to the goals of three-dimensional display, which are defined herein. Selected images are shown to illustrate the usefulness of the methods.
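
Two of the listed techniques are one-liners on a voxel array, shown here for concreteness (axis choice and any depth weighting are left out):

```python
import numpy as np

def maximum_activity_projection(volume, axis=0):
    """Maximum-activity projection: for each ray (a voxel column along
    `axis`), keep the single highest count encountered."""
    return volume.max(axis=axis)

def summed_projection(volume, axis=0):
    """Summed projection: integrate activity along each ray."""
    return volume.sum(axis=axis)
```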

Journal ArticleDOI
TL;DR: The concept of a feasible image is introduced, which is a result of a reconstruction that, if it were a radiation field, could have generated the initial projection data by the Poisson process that governs radioactive decay.
Abstract: The discussion of the causes of image deterioration in the maximum-likelihood estimator (MLE) method of tomographic image reconstruction, initiated with the publication of a stopping rule for that iterative process (E. Veklerov and J. Llacer, 1987), is continued. The concept of a feasible image is introduced, which is the result of a reconstruction that, if it were a radiation field, could have generated the initial projection data by the Poisson process that governs radioactive decay. From the premise that the result of a reconstruction should be feasible, the shape and characteristics of the region of feasibility in projection space are examined. With a new rule, reconstructions from real data can be tested for feasibility. Results of the tests and reconstructed images for the Hoffman brain phantom are shown. A comparative examination of the current methods of dealing with MLE image deterioration is included.
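
A crude stand-in for a feasibility check (not the authors' rule): ask whether the measured counts are statistically consistent with Poisson emission whose means are the forward-projected reconstruction, here via a Pearson chi-square statistic:

```python
import numpy as np
from scipy import stats

def feasible(recon_projection, counts, alpha=0.05):
    """Could `counts` have been drawn as Poisson with mean equal to the
    forward projection of the reconstruction? (Simplified test; the
    degrees of freedom here are just the number of bins.)"""
    mu = np.maximum(recon_projection, 1e-9)
    chisq = np.sum((counts - mu)**2 / mu)     # Pearson statistic
    p = stats.chi2.sf(chisq, counts.size)
    return p > alpha                          # not rejected -> feasible
```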

Journal ArticleDOI
TL;DR: The authors detail the design and implementation of HANDX, a model-based computer vision system for medical image processing: given a digitized hand radiograph, it segments out specific bones and measures particular parameters of the bones, without requiring specific characterization of noise, variations in background contrast, or anatomical differences arising from patient variation.
Abstract: The authors detail the design and implementation of HANDX, a model-based computer vision system used in the domain of medical image processing. Given a digitized hand radiograph, HANDX segments out specific bones and measures particular parameters of the bones, without requiring specific characterization of noise, variations in background contrast, or anatomical differences which arise from patient variation. Observer variability is reduced by the system, and the resulting measurement may be useful for detecting short-term skeletal growth abnormalities in children and may have additional clinical applications. The overall system is modularized into three stages: preprocessing, segmentation, and measurement. In the preprocessing stage, model-based histogram modification is used to normalize the radiograph. The histogram model is based on the physics of the imaging process. The segmentation stage finds and outlines specific bones using domain-dependent and domain-independent knowledge of hand anatomy and physiology and image edges. The measurement stage obtains clinically useful quantitative parameters from the segmented image.

Journal ArticleDOI
TL;DR: An approach to image analysis and processing, called holospectral imaging, is proposed for dealing with Compton scattering contamination in nuclear medicine imaging, and results indicate a slight increase in the statistical noise but also an increase in contrast and greatly improved ability to quantitate the image.
Abstract: An approach to image analysis and processing, called holospectral imaging, is proposed for dealing with Compton scattering contamination in nuclear medicine imaging. The method requires that energy information be available for all detected photons. A set of frames (typically 16) representing the spatial distribution at different energies is then formed. The relationship between these energy frames is analyzed, and the original data is transformed into a series of eigenimages and eigenvalues. In this space it is possible to distinguish the specific contribution to the image of both primary and scattered photons and, in addition, noise. Under the hypothesis that the contribution of the primary photons dominates the image structure, a filtering process can be performed to reduce the scattered contamination. The proportion of scattered information removed by the filtering process is evaluated for all images and depends on the level of residual quantum noise, which is estimated from the size of the smaller eigenvalues. Results indicate a slight increase in the statistical noise but also an increase in contrast and greatly improved ability to quantitate the image.
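
A compact eigenimage sketch: treat the (typically 16) energy frames as variables, diagonalize their covariance, and keep only the dominant components, on the stated hypothesis that primary photons dominate the image structure; the number of components kept is an illustrative parameter:

```python
import numpy as np

def holospectral_filter(frames, keep=2):
    """Project a stack of energy frames (E, H, W) onto its dominant eigenimages."""
    E, H, W = frames.shape
    X = frames.reshape(E, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]        # E x E covariance between energy frames
    w, V = np.linalg.eigh(cov)           # eigenvalues in ascending order
    V = V[:, ::-1][:, :keep]             # keep the largest components
    filtered = V @ (V.T @ Xc) + mean     # drop small-eigenvalue (scatter/noise) part
    return filtered.reshape(E, H, W)
```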

Journal ArticleDOI
TL;DR: Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections.
Abstract: A method that incorporates a priori uniform or nonuniform source distribution probabilistic information and data fluctuations of a Poisson nature is presented. The source distributions are modeled in terms of a priori source probability density functions. Maximum a posteriori probability solutions, as determined by a system of equations, are given. Iterative Bayesian imaging algorithms for the solutions are derived using an expectation maximization technique. Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections. Improvement in image reconstruction from projections with the Bayesian algorithm is demonstrated. Superior results are obtained using the a priori nonuniform source distribution.

Journal ArticleDOI
TL;DR: It is shown how to create homogeneous fields at two frequencies using an unequal distribution of capacitance, and the root-mean-square deviation of field magnitude around a circle is proposed as a measure of field inhomogeneity.
Abstract: The type of radio-frequency (RF) coil known as a high-pass birdcage consists of a set of N wires arranged axially on the surface of a cylinder and connected by capacitors at each end. Such coils are widely used for NMR imaging because of the high degree of field homogeneity they provide. It is shown how to create homogeneous fields at two frequencies, using an unequal distribution of capacitance. A theoretical analysis which uses the discrete Fourier transform of the currents with respect to the angular positions of the N wires is presented. A perturbation theory analysis indicates a small sacrifice in homogeneity. The root-mean-square deviation of field magnitude around a circle is proposed as a measure of field inhomogeneity. For the case of double resonance at proton and fluorine frequencies, the loss of homogeneity is at worst 1% and is small compared to the natural inhomogeneity for an N=8 wire coil for radii up to one half the coil radius. The presence of a conducting shield degrades the homogeneity. The theoretical ideas were confirmed in a computer simulation of one particular coil design. A working coil was constructed and images obtained of proton and fluorine phantoms.
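A sanity-check sketch of the proposed inhomogeneity measure (RMS deviation of field magnitude around a circle), assuming ideal infinite axial wires carrying the cos(phi) "homogeneous" mode and 2-D Biot-Savart; the dual-tuned capacitor distribution itself is not modeled:

```python
import numpy as np

def rms_field_inhomogeneity(n_wires=8, coil_radius=1.0,
                            sample_radius=0.5, n_samples=360):
    """Relative RMS deviation of |B| around a circle inside the coil.
    Each axial wire contributes B ~ I * (-dy, dx) / r^2 (2-D Biot-Savart)."""
    phi_w = 2 * np.pi * np.arange(n_wires) / n_wires
    I = np.cos(phi_w)                                  # mode-1 current distribution
    wx, wy = coil_radius * np.cos(phi_w), coil_radius * np.sin(phi_w)
    theta = 2 * np.pi * np.arange(n_samples) / n_samples
    px, py = sample_radius * np.cos(theta), sample_radius * np.sin(theta)
    dx = px[:, None] - wx[None, :]
    dy = py[:, None] - wy[None, :]
    r2 = dx**2 + dy**2
    bx = (-dy / r2 * I).sum(axis=1)
    by = (dx / r2 * I).sum(axis=1)
    mag = np.hypot(bx, by)
    return mag.std() / mag.mean()

print(rms_field_inhomogeneity())   # small for the ideal homogeneous mode
```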

Journal ArticleDOI
TL;DR: Methods for estimating the regional variance in emission tomography images which arise from the Poisson nature of the raw data are discussed, based on the bootstrap and jackknife methods of statistical resampling theory.
Abstract: Methods for estimating the regional variance in emission tomography images which arise from the Poisson nature of the raw data are discussed. The methods are based on the bootstrap and jackknife methods of statistical resampling theory. The bootstrap is implemented in time-of-flight PET (positron emission tomography); the same techniques can be applied to non-time-of-flight PET and SPECT (single-photon-emission computed tomography). The estimates are validated by comparing them to those obtained by repetition of emission scans, using data from a time-of-flight positron emission tomograph. Simple expressions for the accuracy of the estimates are given. The present approach is computationally feasible and can be applied to any reconstruction technique as long as the data are acquired in a raw, uncorrected form.
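
A minimal parametric-bootstrap sketch: replicate the raw counts as Poisson draws, push each replicate through a reconstruction routine (`reconstruct` is a hypothetical placeholder for any method), and read off the pixelwise variance; the paper works on the raw, uncorrected data and also describes a jackknife variant:

```python
import numpy as np

def bootstrap_variance(counts, reconstruct, n_boot=100, rng=None):
    """Bootstrap estimate of the regional variance of a reconstructed image.

    counts      : raw, uncorrected projection counts (array)
    reconstruct : callable mapping counts -> image (placeholder)
    """
    rng = rng or np.random.default_rng()
    recons = [reconstruct(rng.poisson(counts)) for _ in range(n_boot)]
    return np.var(recons, axis=0)     # pixelwise variance over replicates
```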

Journal ArticleDOI
TL;DR: A recent peak detection algorithm and a set of rules are applied to the image histogram to determine automatically a gray-level threshold between the lung field and mediastinum, which facilitates anatomically selective gray-scale modification and/or unsharp masking.
Abstract: A technique for automatic anatomically selective enhancement of digital chest radiographs is developed. Anatomically selective enhancement is motivated by the desire to simultaneously meet the different enhancement requirements of the lung field and the mediastinum. A recent peak detection algorithm and a set of rules are applied to the image histogram to determine automatically a gray-level threshold between the lung field and mediastinum. The gray-level threshold facilitates anatomically selective gray-scale modification and/or unsharp masking. Further, in an attempt to suppress possible white-band or black-band artifacts due to unsharp masking at sharp edges, local contrast adaptivity is incorporated into anatomically selective unsharp masking by designing an anatomy-selective emphasis parameter which varies asymmetrically with positive and negative values of the local image contrast.
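
A rough sketch of anatomically selective unsharp masking, assuming the histogram-derived gray-level threshold separates the two regions; the asymmetric local-contrast adaptation of the emphasis parameter is omitted here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def selective_unsharp(img, threshold, gain_below=0.5, gain_above=2.0, size=31):
    """Unsharp masking with a region-dependent emphasis parameter:
    pixels on either side of the gray-level threshold get different gains."""
    img = img.astype(float)
    blur = uniform_filter(img, size=size)
    detail = img - blur                                # high-frequency detail signal
    gain = np.where(img < threshold, gain_below, gain_above)
    return blur + gain * detail
```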

Journal ArticleDOI
TL;DR: An automatic technique to estimate adaptively a subspace from the maximum space during the search process itself is described, and this adaptive technique is tested with two quite different types of search algorithms, namely, genetic algorithms and simulated annealing.
Abstract: An image registration technique for application in X-ray, gamma-ray, and magnetic resonance imaging is described. The technique involves searching a real-valued, multidimensional, rectangular, symmetric space of bilinear geometrical transformations for a globally optimal transformation. Physical considerations provide theoretical limits on the search space, but the theoretically maximum allowable space is still often much larger than the smallest rectangular symmetric subspace that contains the optimal transformation. To reduce the search time, the current practice is to guess an optimal subspace from the maximum allowable space. This reduced space is then discretized and searched. An automatic technique to estimate adaptively a subspace from the maximum space during the search process itself is described. This adaptive technique is tested with two quite different types of search algorithms, namely, genetic algorithms and simulated annealing.

Journal ArticleDOI
TL;DR: To test the robustness of the present border detection method, computer-derived coronary borders were compared to independent standards separately for good and poor angiographic images and the accuracy of computer-identified borders was similar in the two cases.
Abstract: A method of coronary border identification is discussed that is based on graph searching principles and is applicable to the broad spectrum of angiographic image quality observed clinically. Cine frames from clinical coronary angiograms were optically magnified, digitized, and graded for image quality. Minimal lumen diameters, referenced to catheter size, were derived from automatically identified coronary borders and compared to those defined using quantitative coronary arteriography and to observer-traced borders. Computer-derived minimal lumen diameters were also compared to intracoronary measurements of coronary vasodilator reserve, a measure of the functional significance of a coronary obstruction. To test the robustness of the present border detection method, computer-derived coronary borders were compared to independent standards separately for good and poor angiographic images. The accuracy of computer-identified borders was similar in the two cases.

Journal ArticleDOI
TL;DR: Anesthetized dogs were scanned in the dynamic spatial reconstructor, a fast multislice computed tomographic scanner, and four differently radiolabeled microspheres were injected into the left atrium, with each label corresponding to a different hemodynamic condition.
Abstract: Anesthetized dogs were scanned in the dynamic spatial reconstructor, a fast multislice computed tomographic scanner. In one group of eight dogs, four differently radiolabeled microspheres (1.5 µm diameter) were injected into the left atrium, with each label corresponding to a different hemodynamic condition. The image data collected from this group of dogs were used to develop the algorithm for estimating regional myocardial perfusion from the CT image data. In an additional 11 dogs, three differently labeled microspheres were also injected during different hemodynamic conditions. The image data collected from this second group of dogs were used to prospectively evaluate the accuracy of the algorithm developed from data in the first group of dogs. The results are presented and discussed.

Journal ArticleDOI
TL;DR: An algorithm which utilizes digital image processing and pattern recognition methods for automated definition of left ventricular (LV) contours and model-based correction proved to be an effective technique, producing significant reduction of error in the final contours.
Abstract: An algorithm which utilizes digital image processing and pattern recognition methods for automated definition of left ventricular (LV) contours is presented. Digital image processing and pattern recognition techniques are applied to digitally acquired radiographic images of the heart to extract the LV contours required for quantitative analysis of cardiac function. Knowledge of the image domain is invoked at each step of the algorithm to orient the data search and thereby reduce the complexity of the solution. A knowledge-based image transformation, directional gradient search, expectations of object versus background location, least-cost path searches by dynamic programming, and a digital representation of possible versus impossible ventricular shape are exploited. The digital representation, composed of a set of characteristic templates, was created using contours obtained by manual tracing. The algorithm was tested by application to three sets of 25 images each. Test sets one and two were used as training sets for creation of the model for contour correction. Model-based correction proved to be an effective technique, producing significant reduction of error in the final contours.
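
A generic dynamic-programming least-cost path of the kind invoked above, with one border point per column and a smoothness limit of one row per step; the cost image would come from the knowledge-based transformation and gradient search:

```python
import numpy as np

def least_cost_path(cost):
    """Return the row index of the minimum-cost left-to-right path through
    `cost`, moving at most one row up or down between adjacent columns."""
    H, W = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((H, W), dtype=int)       # backpointers
    for j in range(1, W):
        for i in range(H):
            lo, hi = max(0, i - 1), min(H, i + 2)
            k = np.argmin(acc[lo:hi, j - 1]) + lo
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(W - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]                        # one border row per column
```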

Journal ArticleDOI
TL;DR: Three novel methods are used to reconstruct a simulated image from a set of incomplete data spanning a 160 degrees angular range: the squashing affine transformation, the circular interpolation method, and a geometry-free reconstruction using the theory of convex projections.
Abstract: Three novel methods are used to reconstruct a simulated image from a set of incomplete data spanning a 160 degrees angular range. These methods are the squashing affine transformation of J.A. Reeds and L.A. Shepp (1987), the circular interpolation method derived from the theory of J.J. Clark, M.R. Palmer, and P.D. Lawrence (1985), and the geometry-free reconstruction using the theory of convex projections. These methods are briefly explained, and their reconstructions are compared for the case of limited angular views.

Journal ArticleDOI
TL;DR: An optical model for imaging the retina through cataracts has been developed and a homomorphic Wiener filter can be designed that will optimally restore the cataractous image (in the mean-square-error sense).
Abstract: An optical model for imaging the retina through cataracts has been developed. The images are treated as sample functions of stochastic processes. On the basis of the model a homomorphic Wiener filter can be designed that will optimally restore the cataractous image (in the mean-square-error sense). The design of the filter requires a priori knowledge of the statistics of either the cataract transmittance function or the noncataractous image. The cataract transmittance function, assumed to be low pass in nature, can be estimated from the cataractous image of the retina. The statistics of the noncataractous image can be estimated using an old, precataractous photograph of the same retina, which is frequently available. Various modes of this restoration concept were applied to clinical photographs and found to be effective. The best results were obtained with short-space enhancement using averaged short-space estimates of the spectra of the two images.
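
A minimal homomorphic Wiener sketch: because the cataract acts as a multiplicative, low-pass transmittance, restoration is done on log-intensities; `snr_spectrum` (signal-to-noise power ratio per frequency, in FFT layout) stands in for statistics that the paper estimates from a precataractous photograph:

```python
import numpy as np

def homomorphic_wiener(cataract_img, snr_spectrum):
    """Log-transform, apply a Wiener-style gain S/(S+N) in the frequency
    domain, and exponentiate back to intensities."""
    logI = np.log(np.maximum(cataract_img.astype(float), 1e-6))
    F = np.fft.fft2(logI)
    gain = snr_spectrum / (snr_spectrum + 1.0)     # Wiener gain from SNR
    return np.exp(np.real(np.fft.ifft2(F * gain)))
```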

Journal ArticleDOI
TL;DR: A method is presented for the automated extraction of myocardial borders in M-mode echocardiograms that uses a maximum tracking procedure whose performance is improved by utilizing a local model to predict the position of the next border point.
Abstract: A method is presented for the automated extraction of myocardial borders in M-mode echocardiograms. The successive steps of processing are: preprocessing for noise reduction, enhancement of border characteristics using a set of suitably chosen matched filters, and final extraction of border points by searching for optimal paths along the time axis. During the last step of processing, the contribution of each elementary border element is characterized by a normalized correlation coefficient. The optimal path, defined as the one that maximizes the sum of all elementary contributions, is determined efficiently using dynamic programming. An alternative approach uses a maximum tracking procedure whose performance is improved by utilizing a local model to predict the position of the next border point. Experimental examples are presented and the performances of the two border extraction algorithms are compared.

Journal ArticleDOI
TL;DR: A method is proposed which makes more efficient use of the available photons by including both oblique and transverse sections in the reconstruction of a positron-emitting radioisotope, by centering a scaled convolution filter on each detected coincidence event line and backprojecting the filter values through the three-dimensional reconstruction volume.
Abstract: Conventional multislice positron cameras reconstruct a three-dimensional distribution of a positron-emitting radioisotope as a set of two-dimensional transverse sections. Consequently, annihilation photons which cross two or more planes are eliminated from the data. Such an approach makes inefficient use of the emitted photon flux. A method is proposed which makes more efficient use of the available photons by including both oblique and transverse sections in the reconstruction. The implementation of the method consists of centering a scaled convolution filter on each detected coincidence event line and backprojecting the filter values through the three-dimensional reconstruction volume. The final image is normalized to allow for the different number of oblique and transverse sections that contribute to each point in the imaging volume. The method has been evaluated using both simulated data and measured data obtained with a rotating area detector positron camera.

Journal ArticleDOI
TL;DR: It is shown that the discretization of the filtered backprojection process can cause the tomographic transfer function to be anisotropic and nonstationary, however, through proper selection of the methods used in reconstruction, a nearly isotropic and stationary MTF can be obtained.
Abstract: A mathematical expression for the modulation transfer function (MTF) of image reconstruction by discrete filtered backprojection (DFBP) is derived. A simulation study is used to investigate the dependence of the MTF of DFBP on: (1) the number of projection views; (2) the type of ramp filter used; (3) the interpolation method used during backprojection; and (4) the position of the object. These results were compared to MTFs calculated from point-source single-photon-emission computed tomographic (SPECT) acquisitions in air. The experimentally obtained MTFs contained much of the same structure as the MTFs of DFBP obtained through simulation. It is shown that the discretization of the filtered backprojection process can cause the tomographic transfer function to be anisotropic and nonstationary. However, through proper selection of the methods used in reconstruction, a nearly isotropic and stationary MTF can be obtained.
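
One way to reproduce the flavor of the simulation study with off-the-shelf tools (recent scikit-image; the view count, filter, and interpolation are the illustrative knobs the paper varies): reconstruct a point source by discrete filtered backprojection and Fourier-transform the PSF to get the MTF:

```python
import numpy as np
from skimage.transform import radon, iradon

# Point source at the center; reconstruct by discrete filtered backprojection.
img = np.zeros((128, 128))
img[64, 64] = 1.0
theta = np.linspace(0.0, 180.0, 64, endpoint=False)   # (1) number of views
sino = radon(img, theta=theta)
psf = iradon(sino, theta=theta,
             filter_name='ramp',         # (2) ramp filter variant
             interpolation='linear')     # (3) backprojection interpolation

# The 2-D MTF is the magnitude of the Fourier transform of the PSF;
# inspecting it along different directions reveals any anisotropy.
mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
mtf /= mtf.max()
```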

Journal ArticleDOI
TL;DR: A projection operator that simultaneously projects onto the set of all functions satisfying raysum constraints in parallel-beam CT, which realizes the ART (algebraic reconstruction technique) in one step.
Abstract: The authors develop a projection operator that simultaneously projects onto the set of all functions satisfying raysum constraints in parallel-beam CT. The projector can be directly extended to the fan-beam case through the process of rebinning. The projector generates a solution that is closest to the initial estimate among all the functions that are consistent with the available raysum data. It realizes the ART (algebraic reconstruction technique) in one step. The projector furnishes the one-step projection reconstruction (OSPR) for any arbitrary configurations of missing data. Because the projection is one-step, there can be a significant reduction in the number of online computations and memory requirements, especially when the missing data exhibit some pattern within a view or between views.

Journal ArticleDOI
TL;DR: Experiments were conducted using a Siemens Rota camera to study the applicability of two linear shift-invariant (LSI) filters, namely, the Wiener and power spectrum equalization filters, for restoration of planar projections and single-photon-emission computed tomography (SPECT) images.
Abstract: Experiments were conducted using a Siemens Rota camera to study the applicability of two linear shift-invariant (LSI) filters, namely, the Wiener and power spectrum equalization filters, for restoration of planar projections and single-photon-emission computed tomography (SPECT) images. In the restoration scheme, the system transfer function, computed from a line source image, is modeled by a 2-D Gaussian function. The noise power spectrum is modeled as a constant for planar images and as a ramp for SPECT images. The filters have been applied to restore computer-simulated 1-D and 2-D projections and SPECT images of two simple phantoms, 2-D projections of two phantoms obtained from the Siemens Rota camera, and SPECT images of a cardiac phantom obtained from the Siemens Rota camera. The filters are shown to perform partial restoration. Considerable noise suppression and detail enhancement have been observed in the restored images. Quantitative measurements such as root-mean-squared error and contrast ratio have been used for objective analysis of the results, which are encouraging.
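
A sketch of the Wiener restoration under the stated models, a 2-D Gaussian system MTF and a ramp noise power spectrum (the SPECT case), with a white object spectrum assumed for simplicity; `sigma_mtf` and `noise_scale` are illustrative parameters:

```python
import numpy as np

def wiener_restore(img, sigma_mtf, noise_scale):
    """Frequency-domain Wiener filter H* / (|H|^2 + N/S) with Gaussian H,
    ramp noise power N, and object power S taken as 1 (an assumption)."""
    Hn, Wn = img.shape
    fy = np.fft.fftfreq(Hn)[:, None]
    fx = np.fft.fftfreq(Wn)[None, :]
    f = np.hypot(fx, fy)                           # radial frequency (cycles/pixel)
    mtf = np.exp(-(f**2) / (2 * sigma_mtf**2))     # 2-D Gaussian transfer function
    noise = noise_scale * f                        # ramp noise power spectrum
    wiener = mtf / (mtf**2 + noise)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * wiener))
```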

Journal ArticleDOI
TL;DR: To determine the seriousness of the scatter problem and how effective scatter correction is at reducing scatter's deleterious effects, dual-energy imaging in the presence of scatter is simulated.
Abstract: Dual-energy imaging provides images in which the conspicuity of the signal of interest is heightened by selectively cancelling intervening structures. Area detectors for dual-energy imaging offer some advantages over line-scanning systems because they make efficient use of the source. Area detectors, however, collect scattered radiation. To determine the seriousness of the scatter problem and how effective scatter correction is at reducing scatter's deleterious effects, dual-energy imaging in the presence of scatter is simulated. The coefficients are modified so that the intervening material and the scatter are cancelled in some particular region of the image. Results for simulations of two clinically important material subtractions, the bone-subtraction image and the soft-tissue-subtraction image, are presented. The effects of scatter on contrast, noise variance, and SNR for the two subtractions are examined.
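
A minimal dual-energy log-subtraction sketch; the weight w is the coefficient that the paper modifies so that the intervening material plus its scatter cancel in a chosen region (nominally, the ratio of the material's attenuation coefficients at the two energies, assumed known):

```python
import numpy as np

def dual_energy_subtract(I_low, I_high, w):
    """Weighted log subtraction of low- and high-energy images; the choice
    of w selects which material (bone or soft tissue) is cancelled."""
    return (np.log(np.maximum(I_high, 1e-9))
            - w * np.log(np.maximum(I_low, 1e-9)))
```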