
Showing papers in "IEEE Transactions on Medical Imaging in 1983"


Journal ArticleDOI
TL;DR: In this paper, the authors compared five limited-extent interpolating functions for image resampling (nearest neighbor, linear, cubic B-spline, and two high-resolution cubic splines); the high-resolution cubic spline functions gave the best response, at the cost of some increase in computing time, and the location of the resampled points with respect to the initial coordinate system has a dramatic effect on the response.
Abstract: When resampling an image to a new set of coordinates (for example, when rotating an image), there is often a noticeable loss in image quality. To preserve image quality, the interpolating function used for the resampling should be an ideal low-pass filter. To determine which limited extent convolving functions would provide the best interpolation, five functions were compared: A) nearest neighbor, B) linear, C) cubic B-spline, D) high-resolution cubic spline with edge enhancement (a = -1), and E) high-resolution cubic spline (a = -0.5). The functions which extend over four picture elements (C, D, E) were shown to have a better frequency response than those which extend over one (A) or two (B) pixels. The nearest neighbor function shifted the image up to one-half a pixel. Linear and cubic B-spline interpolation tended to smooth the image. The best response was obtained with the high-resolution cubic spline functions. The location of the resampled points with respect to the initial coordinate system has a dramatic effect on the response of the sampled interpolating function: the data are exactly reproduced when the points are aligned, and the response has the most smoothing when the resampled points are equidistant from the original coordinate points. Thus, at the expense of some increase in computing time, image quality can be improved by resampling using the high-resolution cubic spline function rather than the nearest neighbor, linear, or cubic B-spline functions.
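The high-resolution cubic spline of options D and E is the standard two-sided cubic convolution kernel parameterized by a. A minimal 1-D sketch (the sample values and positions are illustrative):

```python
def cubic_kernel(s, a=-0.5):
    """Cubic convolution kernel; a = -0.5 gives the high-resolution cubic
    spline (E), a = -1 the edge-enhancing variant (D)."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def resample1d(samples, x, a=-0.5):
    """Interpolate a 1-D signal at fractional position x; the kernel spans
    four samples, matching the four-pixel extent noted in the abstract."""
    i = int(x)
    return sum(samples[k] * cubic_kernel(x - k, a)
               for k in range(max(i - 1, 0), min(i + 3, len(samples))))

# Exact reproduction when the resampled points align with the original grid,
# as the abstract states: kernel(0) = 1 and kernel(+/-1) = kernel(+/-2) = 0.
data = [0.0, 1.0, 4.0, 9.0, 16.0]
assert resample1d(data, 2.0) == 4.0
```

With a = -0.5 the kernel reproduces quadratic signals exactly, which is why it smooths less than linear or cubic B-spline interpolation at midpoints.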

844 citations


Journal ArticleDOI
TL;DR: The main conclusion drawn from this computer simulation study is that even when object inhomogeneities are as small as 5 percent of the background, multiple scattering can introduce severe distortions in multicomponent objects.
Abstract: In this paper, we have first presented a new computational procedure for the calculation of the "true" forward scattered fields of a multicomponent object. By "true" we mean fields that are not limited by the first-order approximations, such as those used in the first-order Born and Rytov calculations. Although the results shown will only include the second-order fields for a multicomponent object, the computational procedure can easily be generalized for higher order scattering effects. Using this procedure we have shown by computer simulation that even when each component of a two-component object is weakly scattering, the multiple scattering effects become important when the components are blocking each other. We have further shown that when strongly scattering components that are large compared to a wavelength are not blocking each other, the scattering effects can be ignored. Both these conclusions agree with intuitive reasoning. Since all the currently available diffraction tomography algorithms are based on the assumption that the object satisfies the first-order scattering assumption, it is interesting to test them under conditions when this assumption is violated. We have used the scattered fields obtained with the new computational procedure to test these algorithms, and shown the resulting artifacts. Our main conclusion drawn from this computer simulation study is that even when object inhomogeneities are as small as 5 percent of the background, multiple scattering can introduce severe distortions in multicomponent objects.
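The qualitative conclusion, that first-order fields degrade when weak components lie in line with one another, can be reproduced with a tiny 1-D Lippmann-Schwinger model. This is not the authors' computational procedure; the grid, contrast values, and slab geometry are illustrative assumptions:

```python
import numpy as np

k = 2 * np.pi                            # wavenumber (unit wavelength)
x = np.linspace(0.0, 4.0, 200)
dx = x[1] - x[0]

# two weak slabs in line with the incident wave, i.e. "blocking" each other
chi = (np.where((x > 0.5) & (x < 1.0), 0.05, 0.0)
       + np.where((x > 2.5) & (x < 3.0), 0.05, 0.0))

G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)  # 1-D Green's function
A = G * (k**2 * chi * dx)                # discretized scattering operator
u0 = np.exp(1j * k * x)                  # incident plane wave

u_true = np.linalg.solve(np.eye(len(x)) - A, u0)   # all scattering orders
u_born = u0 + A @ u0                               # first-order Born field

# Born error relative to the total scattered field
err = np.linalg.norm(u_true - u_born) / np.linalg.norm(u_true - u0)

# doubling the contrast grows the Born error faster than the scattered field
A2 = G * (k**2 * 2 * chi * dx)
u_true2 = np.linalg.solve(np.eye(len(x)) - A2, u0)
err2 = np.linalg.norm(u_true2 - (u0 + A2 @ u0)) / np.linalg.norm(u_true2 - u0)
```

Inverting (I - A) sums all scattering orders at once; truncating the Neumann series after one term gives the first Born field that diffraction tomography algorithms implicitly assume.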

79 citations


Journal ArticleDOI
TL;DR: Two previously published postreconstruction beam hardening correction methods are described within a common framework and are compared from the points of view of the nearness of the corrected polychromatic projection data to the desired monochromatic projection data and the visual quality of the reconstructions.
Abstract: The general nature of postreconstruction beam hardening correction methods is discussed. A methodology for choosing the energy of reconstruction is presented based on a technique of evaluating the "nearness" of two projection data sets. Two previously published postreconstruction beam hardening correction methods are described within a common framework. These methods differ at a number of independent places and so one can produce hybrid methods by interchanging some but not all of the choices. A basic difference between the methods is that one needs only the initial reconstruction during the postreconstruction correcting phase, while the other needs the original projection data as well. Both methods have been implemented and are compared (using a mathematical head phantom) from the points of view of the nearness of the corrected polychromatic projection data to the desired monochromatic projection data and the visual quality of the reconstructions. Variants and hybrids of the two methods are also investigated and recommendations based on the results are presented.
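A minimal sketch of the linearization idea behind choosing an "energy of reconstruction" (a hypothetical two-bin spectrum and water attenuation values, not either of the paper's two correction methods):

```python
import numpy as np

# hypothetical two-energy-bin spectrum and water attenuation values (1/cm)
weights = np.array([0.5, 0.5])
mu = np.array([0.30, 0.18])
mu_ref = 0.18                 # attenuation at the chosen energy of reconstruction

t = np.linspace(0.0, 30.0, 61)                          # water path length, cm
p_poly = -np.log(weights @ np.exp(-np.outer(mu, t)))    # polychromatic projections
p_mono = mu_ref * t                                     # desired monochromatic data

# beam hardening: p_poly bends away from a straight line as t grows;
# a polynomial fit maps polychromatic projections toward monochromatic ones
coeffs = np.polyfit(p_poly, p_mono, 3)
corrected = np.polyval(coeffs, p_poly)
```

The "nearness" of corrected polychromatic data to monochromatic data, the comparison criterion used in the abstract, is here just the residual of this fitted mapping.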

71 citations


Journal ArticleDOI
TL;DR: An algorithm for the automatic contour detection of objects in CT images is presented, which requires little memory space, allows easy incorporation of different types of local digital filters, and is fast.
Abstract: An algorithm for the automatic contour detection of objects in CT images is presented. It requires little memory space, allows easy incorporation of different types of local digital filters, and is fast. Typical computation time with a minicomputer is less than one second. An implementation of the algorithm for the detection of bone contours from low dose CT images is given as an example.
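The flavor of a threshold-based contour detector can be sketched in a few lines; this is a minimal stand-in, not the authors' algorithm or its local digital filters:

```python
def contour_pixels(img, thresh):
    """Mark object pixels (>= thresh) that touch background: a minimal
    contour detector in the spirit of threshold-plus-local-filter methods."""
    h, w = len(img), len(img[0])
    fg = [[img[y][x] >= thresh for x in range(w)] for y in range(h)]
    edge = set()
    for y in range(h):
        for x in range(w):
            if not fg[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # a foreground pixel with a background (or off-image)
                # 4-neighbor lies on the contour
                if not (0 <= ny < h and 0 <= nx < w) or not fg[ny][nx]:
                    edge.add((y, x))
                    break
    return edge

# toy "CT slice": a 3x3 block of bone-like values in a 5x5 background
img = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = 100
edge = contour_pixels(img, 50)
```

A single pass over the image with a fixed-size neighborhood is what keeps memory use small and the computation fast, as the abstract emphasizes.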

51 citations


Journal ArticleDOI
TL;DR: Through this extended TTR (ETTR) algorithm, it is now possible not only to reconstruct an image of a larger object but also to obtain images which have a substantially better signal-to-noise ratio.

Abstract: The true three-dimensional reconstruction (TTR) algorithm previously proposed by the authors is extended to an algorithm with which full utilization of all the oblique rays is possible. Through this extended TTR (ETTR) algorithm, it is now possible not only to reconstruct an image of a larger object but also to obtain images which have a substantially better signal-to-noise ratio. The basic TTR algorithm, as well as its extended version, is discussed together with computer simulation results. In the appendixes, a new two-dimensional Fourier domain weighting function necessary for the implementation of the TTR and ETTR algorithms, as well as the generality of the proposed TTR algorithm, are discussed.

50 citations


Journal ArticleDOI
TL;DR: Digital processing that increases resolution by spatial deconvolution and histogram-based amplitude mapping has been used to improve ultrasonic abdominal image quality and produced resolution improvements and contrast changes to demonstrate more detail in the images.
Abstract: Digital processing that increases resolution by spatial deconvolution and histogram-based amplitude mapping has been used to improve ultrasonic abdominal image quality. The processing was applied to pulse-echo ultrasound data obtained from clinical imaging instrumentation modified to permit digital recording of signals in either RF or video forms for subsequent off-line analysis. Spatial deconvolution was accomplished both along the axis and across the width of the ultrasonic beam. Axial deconvolution was carried out on RF data with a point spread function derived from the echo of a wire target. Lateral deconvolution was performed on the video envelope placed in a matrix by an inverse filter with parameters that adjust themselves to the spatial frequency content of the image being processed. Resultant image amplitudes were mapped into a hyperbolic distribution to increase image contrast for improved demonstration of low amplitudes. The combination of processing produced resolution improvements to show boundaries more sharply and contrast changes to demonstrate more detail in the images.

46 citations


Journal ArticleDOI
TL;DR: A Monte Carlo simulation of the gamma ray transport within a single-slice positron emission tomograph has been generated to study the effects of system parameters on performance, and it is concluded that small-radius rings are better suited for low dose-rate static studies, while larger-radius rings are preferred for high dose-rate dynamic studies.
Abstract: A Monte Carlo simulation of the gamma ray transport within a single-slice positron emission tomograph has been generated to study the effects of system parameters on performance. Included in the simulation are the radioactive source distribution, collimators, and detectors with intercrystal septa. Data are first presented to show the coincidence and singles sensitivities as a function of ring radius. Then, for a fixed radius of 26 cm, the variation of sensitivities is shown as a function of the following variables: slice thickness, patient port size, intercrystal septum dimensions, lower energy discriminator level, and coincidence fan angle. Simulation-generated sensitivity data are compared with experimental values for several tomographs and good agreement is obtained. Discrepancies between two definitions used in experimentally determining scatter fractions are discussed. The Monte Carlo simulation shows that small-radius rings have an effective count rate (quality factor) that is more than 90 percent of that for larger rings at low and moderate activity levels (≤0.25 μCi·cm⁻³), contrary to what is predicted from analytical calculations. It is concluded that small-radius rings are better suited for low dose-rate static studies, while larger-radius rings are preferred for high dose-rate dynamic studies.
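Only the geometric part of such a simulation, coincidence sensitivity for a point source at the center of a bare ring, is easy to sketch; collimators, septa, scatter, and energy discrimination are all omitted here:

```python
import math
import random

def coincidence_sensitivity(radius_cm, slice_cm, n=200_000, seed=1):
    """Monte Carlo estimate of the geometric coincidence sensitivity of a
    single detector ring for a point source at its center: both back-to-back
    511 keV photons must intersect the band of axial extent slice_cm."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        c = rng.uniform(-1.0, 1.0)       # cos(polar angle), isotropic emission
        s = math.sqrt(1.0 - c * c)
        # the photon crosses the ring at axial offset radius * cot(theta);
        # the opposed photon is mirror-symmetric, so one test suffices
        if s > 0 and abs(radius_cm * c / s) <= slice_cm / 2:
            hits += 1
    return hits / n
```

With these illustrative numbers a 13 cm ring is roughly twice as sensitive at the center as a 26 cm ring, consistent with the radius dependence studied in the abstract.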

46 citations


Journal ArticleDOI
TL;DR: In this approach called measurement-dependent filtering, the low spatial frequencies are derived from the selective image and the high frequencies from a nonselective combination of the measurements which has a greater SNR.
Abstract: Recently, a variety of medical imaging systems have been introduced involving selective imaging using multiple measurements. In these systems a number of independent measurements, taken at different times and/or using different X-ray energies, are combined to form a selective image. A prime example is the selective imaging of iodine for vessel imaging. These systems, involving subtraction operations, result in a degradation of the SNR as compared to the individual measurements. In the approach presented here, called measurement-dependent filtering, the low spatial frequencies are derived from the selective image and the high frequencies from a nonselective combination of the measurements which has a greater SNR. The combination provides a significantly improved SNR with the original resolution and a degree of "conspicuity" essentially equal to that of the selective image.
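The core operation is a frequency split. A 1-D sketch with an ideal brick-wall split (the real system's filters and the signal model are assumptions here; the demo also assumes the two images share the same underlying structure):

```python
import numpy as np

def measurement_dependent_filter(selective, nonselective, cutoff=0.05):
    """Take spatial frequencies up to `cutoff` (cycles/sample) from the noisy
    selective image and the rest from the higher-SNR nonselective image;
    an ideal brick-wall split, for illustration only."""
    f = np.fft.fftfreq(len(selective))
    low = np.abs(f) <= cutoff
    S = np.fft.fft(selective)
    N = np.fft.fft(nonselective)
    return np.real(np.fft.ifft(np.where(low, S, N)))

# hypothetical iodine signal: a smooth low-frequency bump
x = np.linspace(0.0, 1.0, 128, endpoint=False)
clean = np.exp(-((x - 0.5) / 0.1) ** 2)

rng = np.random.default_rng(0)
selective = clean + 0.3 * rng.standard_normal(128)      # subtraction-degraded SNR
nonselective = clean + 0.03 * rng.standard_normal(128)  # tenfold better SNR
blended = measurement_dependent_filter(selective, nonselective)
```

Most of the noise power lives in the high frequencies, so replacing them with the nonselective measurement suppresses noise while the low-frequency selective signal, and hence the conspicuity, is untouched.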

32 citations


Journal ArticleDOI
TL;DR: It is found that the engineering falls short of the natural limitations by an inefficiency of about a factor two for most of the individual radiologic system components, allowing for great savings in the exposure required for a given imaging performance when the entire system is optimized.
Abstract: The physical sensitivity of a medical imaging system is defined as the square of the output signal-to-noise ratio per unit of radiation to the patient, or the information/radiation ratio. This sensitivity is analyzed at two stages: the radiation detection stage, and the image display stage. The signal-to-noise ratio (SNR) of the detection stage is a physical measure of the statistical quality of the raw detected data in the light of the imaging task to be performed. As such it is independent of any software or image processing algorithms which belong properly to the display stage. The fundamental SNR approach is applied to a wide variety of medical imaging applications and measured SNR values for signal detection at a given radiation exposure level are compared to the optimal values allowed by nature. It is found that the engineering falls short of the natural limitations by an inefficiency of about a factor two for most of the individual radiologic system components, allowing for great savings in the exposure required for a given imaging performance when the entire system is optimized. The display of the detected information is evaluated from the point of view of observer efficiency, the fraction of the displayed information that a human observer actually extracts.
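The definition in the first sentence can be made concrete with the ideal photon-counting case, a textbook Rose-model relation rather than a computation from the paper:

```python
import math

def detection_snr(fluence, contrast, area):
    """Ideal photon-counting SNR for a uniform lesion: contrast times the
    square root of the expected photon count inside the lesion area."""
    return contrast * math.sqrt(fluence * area)

def physical_sensitivity(fluence, contrast, area):
    """SNR^2 per unit radiation, with photon fluence standing in for patient
    exposure; for an ideal detector this figure is dose-independent."""
    return detection_snr(fluence, contrast, area) ** 2 / fluence
```

An imaging chain that is inefficient by about a factor of two, as the abstract reports for typical components, would halve this figure at each such stage, which is where the projected exposure savings come from.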

30 citations


Journal ArticleDOI
TL;DR: This work relates the properties of the selected slice, such as its width, its rate of attenuation, and its side lobes, to the amplitude modulation of the radio frequency pulse, and uses computer simulation to emphasize the qualitative relationships between these properties.

Abstract: If NMR imaging uses selective excitation, the excitation must be done correctly, since it affects the image resolution. We have related the properties of the selected slice, such as its width, its rate of attenuation, and its side lobes, to the amplitude modulation of the radio frequency pulse. An interesting observation was that multiplying the sinc(t) function by a triangular window can give better results than multiplying it by a Gaussian. We have used computer simulation and tried to emphasize the qualitative relationships between these properties. We conclude with some practical requirements and show the need for a numerical design procedure if the slice properties are to be optimized.

19 citations


Journal ArticleDOI
TL;DR: A new technique is used to interpolate the sampled CT image data in the axial direction for a coronal display that compensates the high spatial frequency components in that direction to get a narrower point-spread function.
Abstract: In this paper a new technique is used to interpolate the sampled CT image data in the axial direction for a coronal display. This technique also compensates the high spatial frequency components in that direction to obtain a narrower point-spread function. Computer simulation results are presented to show the effect of aperture convolution and the effect of spatial sampling in a practical imaging system. It is illuminating to describe the procedures of interpolation in terms of digital filtering. The advantage of restoring spline interpolation is due to the extra frequency compensation in the matrix inversion step. Both a step function and a sinusoidal function are used in the simulation. The enhanced transition edge and the smaller absolute error in the restoring spline interpolated results are shown. The absolute error depends to a certain extent on the sampling interval relative to the Nyquist interval, which is also discussed. There is a small amount of amplification of existing noise or generation of new noise in this technique. Some initial results on a CT image using this technique are also presented.

Journal ArticleDOI
TL;DR: It was found that the black and white display was significantly "better" than the color display and there was no significant difference in the time required to make a decision between the two displays.
Abstract: A black and white display was compared to a color display using the heated object spectrum with the aim of verifying which display was better for the detection of small abnormalities in a reasonably complicated background. Receiver operating characteristic curves with location (LROC) were generated for each type of display for each observer. The data were analyzed by taking the LROC curves in pairs for each observer, fitting a binary ROC curve using a model, and using both parametric and nonparametric statistical tests for the paired differences. It was found, in this study, that the black and white display was significantly "better" than the color display. There was no significant difference in the time required to make a decision between the two displays. The technique of using paired ROC curves is suggested as being an appropriate and powerful test for intercomparing different imaging procedures and techniques in medicine.
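The nonparametric analysis rests on the equivalence between ROC area and the Mann-Whitney statistic, which is compact enough to state in code (a generic illustration, not the authors' LROC fitting procedure):

```python
def roc_area(neg_scores, pos_scores):
    """ROC area as the Mann-Whitney probability that a randomly chosen
    abnormal case scores above a randomly chosen normal one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for n in neg_scores for p in pos_scores)
    return wins / (len(neg_scores) * len(pos_scores))
```

For example, roc_area([1, 2, 3], [2, 3, 4]) is 7/9; an area of 0.5 means chance performance, and comparing such areas in matched pairs per observer is the paired-curve idea the abstract recommends.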


Journal ArticleDOI
TL;DR: An image analysis system was developed for automated tracking of luminal edges and measurement of diameter from cine frames digitized by a video camera/digitizer interfaced to a VAX 11/780 computer; reconstructions of model lumina indicate that as few as three to five radiographic views may be useful in reconstructing coronary luminal shape.

Abstract: Knowledge of coronary luminal shape, in addition to diameter information as routinely obtained from a cineangiogram, may be useful in assessing lesions which deviate from circular symmetry. We have developed an image analysis system for automated tracking of luminal edges and measurement of diameter from cine frames digitized by a video camera/digitizer interfaced to a VAX 11/780 computer. Between vessel edges, cinedensitometric profiles across the vessel long axis are used to provide a rotationally invariant measure of relative luminal cross-sectional area. A maximum entropy iterative algorithm is used to reconstruct the lumen cross section from a set of projection data consisting of the cinedensitometric profiles from multiple radiographic views. Nonaxisymmetric model coronary lumina, such as a crescent shape and a double lumen simulating a coronary artery dissection, were filmed under cineradiographic conditions similar to clinical exposures. Radiographic views at 10° increments about the model lumen long axis over 360° were available for analysis. Graphic display of the reconstructed model lumina indicates that as few as three to five radiographic views may be useful in reconstructing coronary luminal shape.
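The maximum entropy iterative step can be sketched with the classical multiplicative ART update on a toy 2 x 2 "lumen", with row and column sums standing in for cinedensitometric profiles from two views (the authors' implementation details are not reproduced):

```python
import numpy as np

def mart(A, p, n_iter=200):
    """Multiplicative ART: for consistent projection data, the multiplicative
    updates converge toward the maximum-entropy image satisfying A x = p."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for ai, pi in zip(A, p):
            q = ai @ x
            if q > 0:
                x *= (pi / q) ** ai     # update only pixels crossed by this ray
    return x

# 2x2 image seen from two orthogonal directions (row sums and column sums)
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
true_image = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ true_image
recon = mart(A, p)
```

With only a few views the system is underdetermined (here a one-parameter family of images fits the data); the multiplicative update stays positive and selects one consistent solution, which is why a handful of radiographic views can still yield a usable luminal shape.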

Journal ArticleDOI
TL;DR: The equations of image formation in standard tomography, conventional radiography, and tomographic filtering are derived and compared and the mathematical and physical foundations of the method are developed.
Abstract: Conventional radiographs do not provide information about the depths of details and structures because they are two-dimensional projections of three-dimensional bodies. Taking advantage of the finite size of the X-ray source and the divergent nature of the X-ray beam, a radiograph can be processed by two-dimensional digital filtering techniques, so that the image of a particular layer is improved, while the others are degraded. This technique is referred to as a tomographic filtration process (TFP). This paper develops the mathematical and physical foundations of the method. Based on a model of the radiologic process, which is described in the paper, the equations of image formation in standard tomography, conventional radiography, and tomographic filtering are derived and compared.

Journal ArticleDOI
TL;DR: Evaluations of the performance of these filters show that the image quality cannot be as good as that of standard tomography or multiprojection reconstruction techniques; nevertheless they represent an improvement over conventional radiology, and highlight additional depth-dependent information contained in radiographs.
Abstract: A technique is proposed which allows the selective filtering of conventional radiographs in order to extract the depth-dependent information contained therein. This technique, referred to as tomographic filtering or the tomographic filtration process (TFP), takes advantage of the finite size of the X-ray source, so that after processing, the image of a particular layer is improved while the others are not. This paper starts with a brief review of the technique and then concentrates on the design and implementation of digital tomographic filters. Examples are shown, including images of simulated radiographs processed with such filters. Evaluations of the performance of these filters show that the image quality cannot be as good as that of standard tomography or multiprojection reconstruction techniques; nevertheless, they represent an improvement over conventional radiology, and highlight additional depth-dependent information contained in radiographs. The paper concludes with suggestions for further research in this area.

Journal ArticleDOI
TL;DR: In this article, the attenuated tomographic operator (ATO) was proposed for successive transverse plane reconstruction in single photon emission computerized tomography (SPECT) and a regularizing method was proposed to obtain a filtered, accurate solution for the tomographic images.
Abstract: The problem of successive transverse plane reconstruction in single photon emission computerized tomography (SPECT) is modeled in its most general form, which implies the definition of emission tomographic operators (ETO's) for which an analytical solution can be derived. The properties of the attenuated tomographic operator (ATO) are described and discussed, including the case of attenuation distributed over the reconstruction domain. For this particular operator, a regularizing method (RIM) is proposed; it is demonstrated, and tested with simulation studies, that this method extracts a filtered, accurate solution for the tomographic images obtained with a single photon emission tomograph in clinical use, based on a rotating gamma camera.

Journal ArticleDOI
TL;DR: The near term potential of three areas of NMR imaging research is considered, including the imaging and/or chemical shift imaging of species other than hydrogen, notably phosphorus-31, to gain an understanding of in vivo metabolic processes.
Abstract: Current NMR imaging research can be considered in three parts. The first consists of the application of NMR imaging to diagnostic medicine on a routine and cost-effective basis. The second involves the extension of the clinical capabilities of NMR imaging into directions known to be of clinical value (i.e., if the technology works the utility is assured). Both of the above center on hydrogen imaging, which is many thousands of times more sensitive to the NMR process than any other endogenous species [1]. The third area of NMR development involves the imaging and/or chemical shift imaging of these other species, notably phosphorus-31, to gain an understanding of in vivo metabolic processes [2]. This is the area where most of the research funds are being spent, in the belief that significant payoffs must follow once the capabilities of access to in vivo metabolic information are fully understood. It is very difficult to understand the ultimate capabilities of these areas of NMR technology, since predicting tools such as modeling of the process have essentially been a failure. Our own work started in 1975 with the hope of generating a hydrogen imager capable of producing images with 3 × 3 mm resolution, 10 mm section thickness, in 3 min/section. Today we routinely produce images of 1.7 × 1.7 mm resolution and 6.5 mm section thickness at rates of 25 or 50 s/section, and 0.8 × 0.8 mm resolution in twice that time (Fig. 1) [3], [4]. There is little question that the ultimate resolution of NMR imaging will be limited by patient motion during the study time, and will therefore be organ-dependent. Since patient exposure is currently not an issue, signal-to-noise (S/N) levels are dependent on patient patience and cost factors. With this as an introduction, we will consider the near term potential of each of the three areas presented above. It is assumed that the reader is familiar with the basic aspects of NMR [5], [6] and NMR imaging [1].