
Showing papers in "IEEE Transactions on Medical Imaging in 1988"


Journal ArticleDOI
TL;DR: Results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
Abstract: Adaptive histogram equalization (AHE) and intensity windowing have been compared using psychophysical observer studies. Experienced radiologists were shown clinical CT (computerized tomographic) images of the chest. Into some of the images, appropriate artificial lesions were introduced; the physicians were then shown the images processed with both AHE and intensity windowing. They were asked to assess the probability that a given image contained the artificial lesion, and their accuracy was measured. The results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.

347 citations


Journal ArticleDOI
E. Maeland
TL;DR: A study of different cubic interpolation kernels in the frequency domain is presented that reveals novel aspects of both cubic spline and cubic convolution interpolation.
Abstract: A study of different cubic interpolation kernels in the frequency domain is presented that reveals novel aspects of both cubic spline and cubic convolution interpolation. The kernel used in cubic convolution is of finite support and depends on a parameter to be chosen at will. At the Nyquist frequency, the spectrum attains a value that is independent of this parameter. Exactly the same value is found at the Nyquist frequency in the cubic spline interpolation. If a strictly positive interpolation kernel is of importance in applications, cubic convolution with the parameter value zero is recommended.

267 citations
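
The parameter-independence at the Nyquist frequency is easy to verify numerically. Below is a minimal sketch, assuming the standard Keys cubic convolution kernel with free parameter a; the sampling grid and the parameter values tried are illustrative.

```python
import numpy as np

def cubic_convolution_kernel(x, a):
    """Keys' cubic convolution kernel with free parameter a (support [-2, 2])."""
    x = np.abs(x)
    return np.where(
        x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
        np.where(x <= 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0),
    )

# Sample the kernel densely and evaluate its Fourier transform at the
# Nyquist frequency (0.5 cycles/sample) by direct numerical integration.
x = np.linspace(-2, 2, 40001)
dx = x[1] - x[0]
for a in (-1.0, -0.5, 0.0):
    h = cubic_convolution_kernel(x, a)
    spectrum_at_nyquist = np.sum(h * np.cos(np.pi * x)) * dx  # H(f) at f = 1/2
    print(f"a = {a:+.1f}: H(0.5) = {spectrum_at_nyquist:.6f}")
# All three runs agree at 48/pi^4 ~ 0.4928, independent of a.
```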


Journal ArticleDOI
TL;DR: It is shown that noise in the material density images is negatively correlated and that this can be exploited for noise reduction in the two-basis material density images, and locally adaptive algorithms are presented.
Abstract: Dual-energy material density images obtained by prereconstruction-basis material decomposition techniques offer specific tissue information, but they exhibit relatively high pixel noise. It is shown that noise in the material density images is negatively correlated and that this can be exploited for noise reduction in the two-basis material density images. The algorithm minimizes noise-related differences between pixels and their local mean values, with the constraint that monoenergetic CT values, which can be calculated from the density images, remain unchanged. Applied to the material density images, a noise reduction by factors of 2 to 5 is achieved. While quantitative results for regions of interest remain unchanged, edge effects can occur in the processed images. To suppress these, locally adaptive algorithms are presented and discussed. Results are documented by both phantom measurements and clinical examples.

224 citations
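
The constrained smoothing idea can be sketched as follows: move each pixel pair (d1, d2) toward its local means only along the direction that leaves the monoenergetic value c1*d1 + c2*d2 unchanged. This is a minimal sketch, not the authors' locally adaptive algorithm; the coefficients c1 and c2, the window size, and the synthetic noise model are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reduce_anticorrelated_noise(d1, d2, c1, c2, size=5):
    """Smooth two basis-material density images along the direction that
    leaves the monoenergetic CT value m = c1*d1 + c2*d2 at every pixel
    unchanged, exploiting the negative noise correlation between d1 and d2."""
    m1 = uniform_filter(d1, size)   # local means serve as noise-free estimates
    m2 = uniform_filter(d2, size)
    # Shift each pixel pair by t*(c2, -c1): this moves (d1, d2) toward the
    # local means while keeping c1*d1 + c2*d2 exactly constant.
    t = (c2 * (m1 - d1) - c1 * (m2 - d2)) / (c1**2 + c2**2)
    return d1 + t * c2, d2 - t * c1

# Demo with synthetic, negatively correlated noise on constant density images.
rng = np.random.default_rng(0)
n = rng.normal(0, 1, (128, 128))
d1, d2 = 1.0 + n, 1.0 - 0.8 * n          # anticorrelated noise
f1, f2 = reduce_anticorrelated_noise(d1, d2, c1=0.5, c2=0.5)
print(d1.std(), f1.std())                # the noise drops markedly
```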


Journal ArticleDOI
TL;DR: A hierarchical decorrelation method based on interpolation (HINT) appears to outperform all other reversible compression methods considered; the results of the decorrelation step are presented in terms of entropy.
Abstract: The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 bits/pixel, but is considerably less for MR images, whose noise level is substantially higher.

223 citations
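
A single level of HINT-style interpolative decorrelation, with first-order entropies before and after, can be sketched as below. This is a one-dimensional, one-level illustration on a synthetic image; the published method recurses hierarchically over both dimensions before entropy coding the residuals.

```python
import numpy as np

def entropy_bits(x):
    """First-order entropy (bits/pixel) of an integer-valued image."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def hint_one_level(img):
    """One HINT-style decorrelation step along rows: keep the even rows and
    replace each odd row by its residual against the average of the two
    neighboring even rows (interpolative prediction)."""
    img = img.astype(np.int64)
    even, odd = img[0::2], img[1::2]           # assumes an even row count
    below = np.vstack([even[1:], even[-1:]])   # clamp at the bottom edge
    return odd - (even + below) // 2

# A smooth synthetic image stands in for an angiographic frame.
yy, xx = np.mgrid[0:256, 0:256]
img = (128 + 100 * np.sin(xx / 25.0) * np.cos(yy / 40.0)).astype(np.int64)
print("raw entropy     :", round(entropy_bits(img), 2))
print("residual entropy:", round(entropy_bits(hint_one_level(img)), 2))
# The residual entropy is far lower, which is what makes Huffman coding
# of the residuals effective.
```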


Journal ArticleDOI
TL;DR: A novel algorithm first detects spatially significant features based on the measurement of image intensity variations, then uses high-level knowledge about the heart wall to label the detected features for noise rejection and to fill in the missing points by interpolation.
Abstract: Cardiac function is evaluated using echocardiographic analysis of shape attributes, such as the heart wall thickness or the shape change of the heart wall boundaries. This requires that the complete boundaries of the heart wall be detected from a sequence of two-dimensional ultrasonic images of the heart. The image segmentation process is made difficult since these images are plagued by poor intensity contrast and dropouts caused by the intrinsic limitations of the image formation process. Current studies often require that trained operators manually trace the heart walls. A review of previous work is presented, along with how this problem can be viewed in the context of the computer vision area. A novel algorithm is presented for detecting the boundaries. This algorithm first detects spatially significant features based on the measurement of image intensity variations. Since the detection step suffers from false alarms and missing boundary points, further processing uses high-level knowledge about the heart wall to label the detected features for noise rejection and to fill in the missing points by interpolation.

150 citations


Journal ArticleDOI
TL;DR: A method for estimating the three-dimensional skeletons and transverse lumen areas of coronary arteries from digital X-ray angiograms is described; it includes an automatic artery tracking procedure that applies an adaptive window to the densitometric profile data used in the parameter estimation.
Abstract: A method for estimating the three-dimensional (3D) skeletons and transverse areas of the lumens of coronary arteries from digital X-ray angiograms is described. The method is based on the use of a 3D generalized cylinder (GC) consisting of a series of 3D elliptical disks transverse to and centered on a 3D skeleton (medial axis) of the coronary arteries. The estimates of the transverse areas are based on a nonlinear least-squares-error estimation technique described by D.W. Marquardt (1963). This method exploits densitometric profiles, boundary estimates, and the orientation of the arterial skeleton in 3-space and includes an automatic artery tracking procedure. It applies an adaptive window to the densitometric profile data that are used in the parameter estimation. Preliminary experimental tests of the procedure on angiograms of in vivo human coronaries and on synthetic images yield encouraging results.

144 citations


Journal ArticleDOI
TL;DR: An interpolation method is proposed for generating the intermediate contours between a start contour and a goal contour, which provides a powerful tool for reconstructing the 3D object from serial cross sections.
Abstract: An interpolation method is proposed for generating the intermediate contours between a start contour and a goal contour. Coupled with the display method for voxel-based objects, it provides a powerful tool for reconstructing the 3D object from serial cross sections. The method tries to fill in the lost information between two slices, assuming that there is smooth change between them. This is a reasonable assumption provided that the sampling is at least twice the Nyquist rate, in which case the result of the interpolation is expected to be very close to reality. One of the major advantages of this approach is its ability to handle the branching problem. Another major advantage is that after each intermediate contour is generated and sent to the display device, there is no need to keep it in memory unless the solid model will be used for further processing. Thus, the space complexity of this algorithm is relatively low.

115 citations
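
A minimal point-correspondence sketch of contour interpolation between two slices is given below: both contours are resampled by arc length, aligned by a cyclic shift, and blended linearly. The helper names and test shapes are illustrative, and branching, which is a key feature of the published method, is not handled here.

```python
import numpy as np

def resample_closed(contour, n):
    """Resample a closed 2D contour (k x 2 array) to n points by arc length."""
    pts = np.vstack([contour, contour[:1]])          # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n, endpoint=False)
    return np.column_stack([np.interp(t, s, pts[:, 0]),
                            np.interp(t, s, pts[:, 1])])

def interpolate_contours(c_start, c_goal, alpha, n=200):
    """Intermediate contour at fraction alpha (0..1) between two slices,
    assuming a smooth change between them (no branching handled here)."""
    a, b = resample_closed(c_start, n), resample_closed(c_goal, n)
    # Align starting points by the cyclic shift minimizing total distance.
    shifts = [np.sum(np.linalg.norm(np.roll(b, k, axis=0) - a, axis=1))
              for k in range(n)]
    b = np.roll(b, int(np.argmin(shifts)), axis=0)
    return (1.0 - alpha) * a + alpha * b

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([2 * np.cos(theta), 0.5 * np.sin(theta)])
mid = interpolate_contours(circle, ellipse, alpha=0.5)
print(mid.shape)  # (200, 2): one intermediate contour per alpha value
```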


Journal ArticleDOI
TL;DR: The circular harmonic transform (CHT) solution of the exponential Radon transform (ERT) is applied to single-photon emission computed tomography (SPECT) for uniform attenuation within a convex boundary, demonstrating that the boundary conditions are a more general property of the Radon transform and not a property unique to rectangular coordinates.
Abstract: The circular harmonic transform (CHT) solution of the exponential Radon transform (ERT) is applied to single-photon emission computed tomography (SPECT) for uniform attenuation within a convex boundary. An important special case also considered is the linear (unattenuated) Radon transform (LRT). The solution is in the form of an orthogonal function expansion matched to projections that are in parallel-ray geometry. This property allows for efficient and accurate processing of the projections with the fast Fourier transform (FFT) without interpolation or beam matching. The algorithm is optimized by the use of boundary conditions on the 2-D Fourier transform of the sinogram. These boundary conditions imply that the signal energy of the sinogram is concentrated in well-defined sectors in transform space. The angle defining the sectors depends in a direct way on the radius of the field of view. These results are also obtained for fan-beam geometry and the linear Radon transform (the Fourier-Chebyshev transform of the sinogram) to demonstrate that the boundary conditions are a more general property of the Radon transform and not a property unique to rectangular coordinates.

107 citations


Journal ArticleDOI
TL;DR: It is shown that a special set of convex projections duplicates the result of the algebraic reconstruction technique (ART), and use of a priori information enhances the quality of the results, especially when partial data have been used, in which case ART fails.
Abstract: The method of convex projections is applied to reconstruct an image in computed tomography. This appears to be the first time that the method has been used to obtain geometry-free reconstruction from ray-sum data. It is shown that a special set of convex projections duplicates the result of the algebraic reconstruction technique (ART). The similarities and differences of these two methods are discussed. It is pointed out that the use of a priori information enhances the quality of the results, especially when partial data have been used, in which case ART fails. Simulations and reconstruction of a CT image are also furnished to demonstrate the feasibility of this method.

105 citations
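
The connection between convex projections and ART can be illustrated directly: each ray sum defines a hyperplane (a convex set), the Kaczmarz/ART update is the orthogonal projection onto it, and a priori information such as nonnegativity is simply one more convex projection. A toy sketch follows; the 2x2 image and the ray geometry are made up.

```python
import numpy as np

def art_pocs(A, b, n_sweeps=50, nonnegative=True):
    """Kaczmarz/ART iteration viewed as alternating projections onto convex
    sets: each ray sum a_i . x = b_i defines a hyperplane, and an optional
    nonnegativity constraint encodes a priori information."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]  # hyperplane proj.
            if nonnegative:
                x = np.maximum(x, 0.0)  # projection onto the positive orthant
    return x

# Tiny 2x2 "image" observed through four ray sums (rows and columns).
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],   # row 1 sum
              [0, 0, 1, 1],   # row 2 sum
              [1, 0, 1, 0],   # column 1 sum
              [0, 1, 0, 1]],  # column 2 sum
             dtype=float)
print(np.round(art_pocs(A, A @ x_true), 3))  # ~ [1, 2, 3, 4]
```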


Journal ArticleDOI
TL;DR: A method is introduced to compensate for missing projection data that can result from gaps between detectors or from malfunctioning detectors, thus completing the data set so that the filtered backprojection algorithm can be used to reconstruct artifact-free images.
Abstract: A method is introduced to compensate for missing projection data that can result from gaps between detectors or from malfunctioning detectors. This method uses constraints in the Fourier domain to estimate the missing data, thus completing the data set so that the filtered backprojection algorithm can be used to reconstruct artifact-free images. Using a simulated brain phantom, the image reconstructed from a data set whose gaps were filled in with this technique is nearly indistinguishable from an image reconstructed from a complete data set without gaps.

104 citations
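
One generic way to realize such Fourier-domain estimation of missing samples is a Papoulis-Gerchberg-style iteration, sketched below in 1D. The band-limit constraint, gap location, and test signal are assumptions for illustration, not the authors' exact constraint set.

```python
import numpy as np

def fill_gaps_bandlimited(signal, known_mask, band_frac=0.2, n_iter=200):
    """Papoulis-Gerchberg-style gap filling: alternately enforce the measured
    samples in signal space and a band limit in the Fourier domain."""
    n = signal.size
    freq = np.fft.fftfreq(n)
    in_band = np.abs(freq) <= band_frac / 2        # Fourier support constraint
    est = np.where(known_mask, signal, 0.0)        # initial guess: zeros in gaps
    for _ in range(n_iter):
        spec = np.fft.fft(est)
        est = np.fft.ifft(spec * in_band).real     # project onto the band limit
        est[known_mask] = signal[known_mask]       # re-insert measured data
    return est

# Band-limited test signal with a detector-gap-like hole.
t = np.arange(512)
truth = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.cos(2 * np.pi * 0.03 * t)
mask = np.ones(512, bool)
mask[200:230] = False                              # missing samples (the "gap")
est = fill_gaps_bandlimited(truth * mask, mask)
print(np.abs(est - truth)[~mask].max())            # gap residual shrinks with iterations
```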


Journal ArticleDOI
TL;DR: The authors describe a novel algorithm, known as sequential edge linking (SEL), for the automatic definition of coronary arterial edges in cineangiograms, based on sequential tree searching of possible coronary artery boundary locations.
Abstract: The authors describe a novel algorithm, known as sequential edge linking (SEL), for the automatic definition of coronary arterial edges in cineangiograms. This algorithm is based on sequential tree searching of possible coronary artery boundary locations. Using a coronary artery phantom, the authors compared the results obtained using SEL with hand-traced boundaries. Using a magnification of 2×, the results are generally good, with the average error being 1.7% of the diameter. Actual coronary artery images were also processed, and a similar comparison indicated that total areas were comparable but the hand-drawn stenoses were, on average, 7% greater than the unobstructed diameter. Based on these data, it is concluded that the SEL algorithm is an accurate method for fully automatic definition of coronary artery dimensions.

Journal ArticleDOI
TL;DR: A general reconstruction algorithm for magnetic resonance imaging (MRI) with gradients having arbitrary time dependence is presented and an explicit representation of the point spread function (PSF) in the weighted correlation method is given.
Abstract: A general reconstruction algorithm for magnetic resonance imaging (MRI) with gradients having arbitrary time dependence is presented. This method estimates spin density by calculating the weighted correlation of the observed free induction decay signal and the phase modulation function at each point. A theorem which states that this method can be derived from the conditions of linearity and shift invariance is presented. Since these conditions are general, most of the MRI reconstruction algorithms proposed so far are equivalent to the weighted correlation method. An explicit representation of the point spread function (PSF) in the weighted correlation method is given. By using this representation, a method to control the PSF and the static field inhomogeneity effects is studied. A correction method for the inhomogeneity is proposed, and a limitation is clarified. Some simulation results are presented.
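
A 1D sketch of the weighted correlation estimate is given below: the observed signal is correlated against the phase modulation function exp(2j*pi*k(t)*x) at each point, with simple density-compensation weights. The trajectory k(t), the weights, and the test object are illustrative assumptions.

```python
import numpy as np

def weighted_correlation(s, k, x, w=None):
    """Weighted-correlation (conjugate-phase) estimate of the spin density:
    rho(x) = sum_t w(t) * s(t) * exp(+2j*pi*k(t)*x), valid for gradients
    with arbitrary time dependence through the trajectory k(t); 1D here."""
    if w is None:
        w = np.abs(np.gradient(k))     # simple sample-density compensation
    phase = np.exp(2j * np.pi * np.outer(x, k))
    return phase @ (w * s)

# Simulate a free induction decay from a two-box 1D object under a
# deliberately nonuniform (time-varying-gradient) trajectory.
x = np.linspace(-0.5, 0.5, 256, endpoint=False)
rho = ((np.abs(x + 0.2) < 0.05) | (np.abs(x - 0.1) < 0.1)).astype(float)
t = np.linspace(-1, 1, 2048)
k = 128 * np.sign(t) * np.abs(t) ** 1.3    # nonuniform k-space coverage
s = np.exp(-2j * np.pi * np.outer(k, x)) @ rho * (x[1] - x[0])
rho_hat = weighted_correlation(s, k, x)
# The magnitude closely tracks the true object (with Gibbs ringing at edges):
print(float(np.corrcoef(np.abs(rho_hat), rho)[0, 1]))
```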

Journal ArticleDOI
TL;DR: The largest single factor in improving crystal identification and spatial resolution was energy discrimination, and six conditions of detector identification were tested.
Abstract: A two-dimensional array detector system, consisting of 4×8 matrices of bismuth germanate (BGO) crystals coupled to four photomultipliers, was evaluated. Coincidence timing resolution of a pair of array detectors was 6.1 ns FWHM and 11.3 ns FWTM. Energy resolution per individual crystal ranged from 16.8 to 24.1% FWHM at 511 keV for an amplitude variation of a factor of 2.8. The signals from the photomultipliers were summed pairwise, the pairs in each dimension subtracted, and the differences divided by the amplitude of the signal summed from all four photomultipliers. The two resulting pulse-height spectra contained peaks corresponding to the row or column containing the detector of interaction. Six conditions of detector identification were tested. The correct identification of the detector ranged from 76% to 87% of the events. The largest single factor in improving crystal identification and spatial resolution was energy discrimination.

Journal ArticleDOI
M. Fuderer
TL;DR: The theoretical information content, defined by C.E. Shannon (1948), is proposed as an objective measure of MR (magnetic resonance) image quality, which takes into account the contrast-to-noise ratio (CNR), scan resolution, and field of view.
Abstract: The theoretical information content, defined by C.E. Shannon (1948), is proposed as an objective measure of MR (magnetic resonance) image quality. This measure takes into account the contrast-to-noise ratio (CNR), scan resolution, and field of view. It is used to derive an optimum in the tradeoff problem between image resolution and CNR, and as a criterion to assess the usefulness of high-resolution (512²) MR images. The results show that for a given total acquisition time, an optimum value of the resolution can be found. This optimum is very broad. To apply Shannon's theory of information content to MR images, a model for the spatial spectral power density of these images is required. Such a model has been derived from experimental observations of ordinary MR images, as well as from theoretical considerations.
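
The resolution/CNR tradeoff can be illustrated with a toy calculation: treat each pixel as a Gaussian channel, so an N x N image carries roughly N^2 * log2(1 + SNR^2) bits, and let per-pixel SNR fall with matrix size at fixed scan time. The exponent alpha and the constants below are illustrative assumptions, not Fuderer's model, but they reproduce the qualitative conclusion that the optimum is very broad.

```python
import numpy as np

# Toy model: per-pixel SNR falls as N^(-alpha) at fixed total scan time;
# alpha, n_ref, and snr_at_ref are assumed values for illustration only.
alpha, n_ref, snr_at_ref = 2.0, 256, 20.0

n = np.arange(64, 1025, 16, dtype=float)
snr = snr_at_ref * (n_ref / n) ** alpha
info_bits = n**2 * np.log2(1.0 + snr**2)     # total information per image

best = n[np.argmax(info_bits)]
print(f"optimum matrix size ~ {best:.0f}")
# The curve around the maximum is quite flat ("the optimum is very broad"):
near = info_bits / info_bits.max()
print(f"within 5% of peak for N in "
      f"[{n[near > 0.95].min():.0f}, {n[near > 0.95].max():.0f}]")
```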

Journal ArticleDOI
TL;DR: A step-response method has been developed to extract the properties (amplitudes and decay time constants) of intrinsic-eddy-current-sourced magnetic fields generated in whole-body magnetic resonance imaging systems when pulsed field gradients are applied.
Abstract: A step-response method has been developed to extract the properties (amplitudes and decay time constants) of intrinsic-eddy-current-sourced magnetic fields generated in whole-body magnetic resonance imaging systems when pulsed field gradients are applied. Exact compensation for the eddy-current effect is achieved through a polynomial rooting procedure and matrix inversion once the 2N properties of the N-term decay process are known. The output of the inversion procedure yields the required characteristics of the filter for spectrum magnitude and phase equalization. The method is described for the general case along with experimental results for one-, two-, and three-term inversions. The method's usefulness is demonstrated for the usually difficult case of long-term (200-1000-ms) eddy-current compensation. Field-gradient spectral flatness measurements over 30 mHz-100 Hz are given to validate the method.

Journal ArticleDOI
TL;DR: It is demonstrated that the performance of the algorithm is superior to that of the filtered backprojection method in computational speed on realistic-size problems and is equivalent to filtered backprojection in accuracy of reconstruction.
Abstract: The notion of a linogram corresponds to the notion of a sinogram in the conventional representation of projection data for image reconstruction. In the sinogram, points which correspond to rays through a fixed point in the cross section to be reconstructed all fall on a sinusoidal curve. In the linogram, however, these points fall on a straight line. The implementation of a novel image reconstruction method using this property is discussed. The implementation is of order N² log N, where N is proportional to the number of pixels on a side of the reconstruction region. It is demonstrated that the performance of the algorithm is superior to that of the filtered backprojection method in computational speed on realistic-size problems and is equivalent to filtered backprojection in accuracy of reconstruction.

Journal ArticleDOI
TL;DR: An algorithm is introduced that models lung airway structures and uses computer simulations of growth based on fractal concepts to generate structures that are in good agreement with actual morphometric data.
Abstract: The fractal dimension (D_F) is one measure of the space-filling features of a self-similar structure. Additionally, since D_F varies with branching level, there may be potential critical locations that are functionally important. The authors introduce an algorithm that models lung airway structures and uses computer simulations of growth based on fractal concepts. Under these conditions, limits imposed by simple boundary constraints generate structures that are in good agreement with actual morphometric data.
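
The fractal dimension itself can be estimated by standard box counting, as sketched below on a synthetic self-similar point set (a Sierpinski triangle, whose true D_F = log 3 / log 2 ~ 1.585). This is a generic estimator for illustration, not the authors' airway-growth model.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal dimension D_F of a 2D point set by box counting:
    count occupied boxes N(s) at each box size s and fit log N ~ D * log(1/s)."""
    counts = []
    for s in sizes:
        boxes = np.unique(np.floor(points / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Generate a Sierpinski triangle by the chaos game.
rng = np.random.default_rng(0)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p, pts = np.array([0.1, 0.1]), []
for _ in range(100_000):
    p = (p + verts[rng.integers(3)]) / 2
    pts.append(p)
pts = np.array(pts)
print(box_counting_dimension(pts, sizes=[1/4, 1/8, 1/16, 1/32, 1/64]))
# ~1.58, close to log(3)/log(2) = 1.585
```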

Journal ArticleDOI
TL;DR: SPRINT II is a stationary detector ring tomograph designed for brain imaging that combines two-dimensional sodium iodide camera modules that use maximum-likelihood position logic in a ring with a scintillator packing fraction of 96%.
Abstract: SPRINT II is a stationary detector ring tomograph designed for brain imaging. Eleven two-dimensional sodium iodide camera modules that use maximum-likelihood position logic are arranged in a 50-cm-diameter ring with a scintillator packing fraction of 96%. A 34-cm-diameter rotating lead aperture ring containing either 10 or 12 slits is used for in-plane collimation, while the z-axis collimator is constructed of parallel lead foil rings. The field of view is 22 cm in diameter by 12 cm long. Sensitivity is 10 counts/s/μCi for an on-axis 99mTc point source and 8500 counts/s/μCi/cm³ for a 19.8-cm-diameter by 6.2-cm-long cylindrical source. Longitudinal resolution is 10 mm FWHM, and in-plane resolution varies from 8 mm FWHM on-axis to 5 mm FWHM at a radius of 9 cm. Performance results are presented.

Journal ArticleDOI
TL;DR: An efficient knowledge-based multigrid reconstruction algorithm based on the ML approach is presented to overcome problems of the slow convergence rate, the large computation time, and the nonuniform correction efficiency of each iteration.
Abstract: The problem of reconstruction in positron emission tomography (PET) is basically estimating the number of photon pairs emitted from the source. Using the concept of the maximum-likelihood (ML) algorithm, the problem of reconstruction is reduced to determining an estimate of the emitter density that maximizes the probability of observing the actual detector count data over all possible emitter density distributions. A solution using this type of expectation maximization (EM) algorithm with a fixed grid size is severely handicapped by the slow convergence rate, the large computation time, and the nonuniform correction efficiency of each iteration, which makes the algorithm very sensitive to the image pattern. An efficient knowledge-based multigrid reconstruction algorithm based on the ML approach is presented to overcome these problems.
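
For reference, the fixed-grid EM update that the multigrid scheme accelerates has a compact form. A minimal sketch follows; the toy system matrix and phantom are made up.

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Fixed-grid ML-EM iteration for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)),
    where A holds detection probabilities and y the measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 16))                       # toy system matrix
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true)                    # Poisson count data
print(np.round(mlem(A, y.astype(float)), 2))
```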

Journal ArticleDOI
TL;DR: Both spectra and autocorrelations of two-dimensional ultrasound images of normal and abnormal livers were computed and the fast Hartley transform was used to transform image data.
Abstract: The fast Hartley transform (FHT) is used to transform two-dimensional image data. Because the Hartley transform is real-valued, it does not require complex operations. Both spectra and autocorrelations of two-dimensional ultrasound images of normal and abnormal livers were computed.
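
Because the Hartley transform of a real image is real and, up to a 1/(MN) factor, self-inverse, both the power spectrum and the circular autocorrelation can be computed without complex values appearing in the result. A sketch using the FFT as the underlying engine (H = Re F - Im F) rather than a dedicated FHT routine:

```python
import numpy as np

def dht2(img):
    """2D discrete Hartley transform via the FFT: H = Re(F) - Im(F).
    Real-valued, and self-inverse up to division by the number of pixels."""
    F = np.fft.fft2(img)
    return F.real - F.imag

img = np.random.default_rng(0).random((64, 64))
H = dht2(img)
print(np.allclose(dht2(H) / img.size, img))    # True: inverse = forward / (M*N)

# Power spectrum from the Hartley coefficients:
# |F(u,v)|^2 = (H(u,v)^2 + H(-u,-v)^2) / 2, with indices taken modulo M, N.
H_neg = np.roll(np.roll(H[::-1, ::-1], 1, 0), 1, 1)
power = 0.5 * (H**2 + H_neg**2)

# Circular autocorrelation by Wiener-Khinchin (power is even, so the
# Hartley transform here coincides with the inverse Fourier transform):
autocorr = dht2(power) / img.size
print(np.allclose(autocorr[0, 0], np.sum(img**2)))  # True: zero-lag check
```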

Journal ArticleDOI
TL;DR: Results of applying these procedures to data obtained from a phantom containing cold cylinders and to data from a cold spot-resolution phantom are presented and are shown to be superior to the results of correcting for scatter by scatter-window subtraction.
Abstract: Three procedures for the removal of Compton-scattered data in SPECT by constrained deconvolution are presented. The first is a deconvolution of a 2-D measured point-source response function (PSRF) containing scatter from a single reconstructed transaxial image; the second is a deconvolution of a 2-D measured PSRF from each frame of projection data prior to reconstruction; the third involves deconvolution of a 3-D measured PSRF from a stack of reconstructed slices. Results of applying these procedures to data obtained from a phantom containing cold cylinders and to data from a cold spot-resolution phantom are presented and are shown to be superior to the results of correcting for scatter by scatter-window subtraction. Both 3-D deconvolution from reconstructed images and 2-D deconvolution from projection data show major improvements in image contrast, resolution, and quantitation. Improvements are especially marked for small (1.0-3.0 cm) cold sources.

Journal ArticleDOI
TL;DR: The Sobel operator was found to be superior to the Roberts operator in edge enhancement and a theoretical explanation for the superior performance was developed based on the concept of analyzing the x and y Sobel masks as linear filters.
Abstract: Reference is made to the Sobel and Roberts gradient operators used to enhance image edges. Overall, the Sobel operator was found to be superior to the Roberts operator in edge enhancement. A theoretical explanation for the superior performance of the Sobel operator was developed based on the concept of analyzing the x and y Sobel masks as linear filters. By applying pill-box, Gaussian, or median filtering prior to applying a gradient operator, noise was reduced. The pill-box and Gaussian filters were more computationally efficient than the median filter with approximately equal effectiveness in noise reduction.
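
The comparison is easy to reproduce in outline: apply both mask pairs as linear filters after a smoothing prefilter and compare the edge response with the background response. The test image, noise level, and figure of merit below are illustrative choices, not the paper's protocol.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Sobel and Roberts masks applied as linear filters, with Gaussian
# prefiltering for noise reduction.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
ROBERTS_A = np.array([[1, 0], [0, -1]], float)
ROBERTS_B = np.array([[0, 1], [-1, 0]], float)

def gradient_magnitude(img, masks):
    g = [convolve(img, m, mode='nearest') for m in masks]
    return np.hypot(g[0], g[1])

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[:, 64:] = 1.0                               # vertical step edge
img += rng.normal(0, 0.2, img.shape)            # additive noise

smoothed = gaussian_filter(img, sigma=1.0)      # prefilter before the gradient
for name, masks in [("Sobel", (SOBEL_X, SOBEL_Y)),
                    ("Roberts", (ROBERTS_A, ROBERTS_B))]:
    edges = gradient_magnitude(smoothed, masks)
    # Edge response at the step vs. background response away from it;
    # Sobel's built-in smoothing typically yields the larger ratio.
    print(name, edges[:, 63:65].mean() / edges[:, :32].mean())
```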

Journal ArticleDOI
TL;DR: A triangular subtraction technique with a reduced number of weighting factors is proposed for the calculation of the strip integral and a unified path integral theory is proposed to bridge various models and to clarify the integral interaction when a line or a strip passes through a square pixel matrix.
Abstract: Calculated forward projection techniques have been used for various purposes in computerized tomography applications, and several models have been proposed to simulate the tomography projection process. Since the area weighted strip integral is one of the best models, methods to facilitate the computation of strip integrals would be very useful. In particular, a triangular subtraction technique with a reduced number of weighting factors is proposed for the calculation of the strip integral. A unified path integral theory is also proposed to bridge various models and to clarify the integral interaction when a line or a strip passes through a square pixel matrix.

Journal ArticleDOI
TL;DR: A system is presented for digitization and automated comparison of photographic images of patients obtained at different times using a high-precision video camera.
Abstract: A system is presented for digitization and automated comparison of photographic images of patients obtained at different times using a high-precision video camera. The images can be acquired either directly or from slides. The two images to be compared are registered using a complex geometrical and gray-level registration model including six parameters (planar translation, rotation, magnification, and a linear transformation of the gray levels). The values of the registration parameters are automatically calculated by maximizing an integer similarity measure selected for robustness. The optimization of this function with respect to the registration parameters is performed using an adaptive random search strategy. The analysis of the differences between the registered images can be carried out through visual inspection of the subtraction image, in which artifacts due to remaining infrapixel shifts have been suppressed.

Journal ArticleDOI
TL;DR: The HTR algorithm is outlined, and it is shown that its performance compares favorably to the popular convolution-backprojection algorithm.
Abstract: A relatively unexplored algorithm is developed for reconstructing a two-dimensional image from a finite set of its sampled projections. The algorithm, referred to as the Hankel-transform-reconstruction (HTR) algorithm, is polar-coordinate based. The algorithm expands the polar-form Fourier transform F(r, θ) of an image into a Fourier series in θ; calculates the appropriately ordered Hankel transform of the coefficients of this series, giving the coefficients for the Fourier series of the polar-form image f(ρ, φ); sums this series, giving a polar-form reconstruction; and interpolates this reconstruction to a rectilinear grid. The HTR algorithm is outlined, and it is shown that its performance compares favorably to the popular convolution-backprojection algorithm.

Journal ArticleDOI
TL;DR: A Bayesian approach with maximum-entropy (ME) priors is proposed to reconstruct an object either from Fourier domain data (the Fourier transform of diffracted field measurements) in the case of diffraction tomography, or directly from the original projection data in the case of X-ray tomography.
Abstract: The authors propose a Bayesian approach with maximum-entropy (ME) priors to reconstruct an object either from Fourier domain data (the Fourier transform of diffracted field measurements) in the case of diffraction tomography, or directly from the original projection data in the case of X-ray tomography. The objective function obtained is composed of a quadratic term resulting from χ² statistics and an entropy term; it is minimized using variational techniques and a conjugate-gradient iterative method. The computational cost and practical implementation of the algorithm are discussed. Some simulated results in X-ray and diffraction tomography are given to compare this method to the classical ones.
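
A minimal sketch of such an objective and its minimization is given below, substituting f = exp(u) to enforce positivity and using SciPy's conjugate-gradient minimizer in place of the authors' specific iterative scheme. The toy projection matrix, noise level, and regularization weight are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def me_reconstruct(A, g, lam=0.05, sigma=1.0):
    """Maximum-entropy Bayesian reconstruction sketch: minimize
    J(f) = ||A f - g||^2 / (2 sigma^2) + lam * sum(f log f - f)  over f > 0.
    Positivity is enforced by the substitution f = exp(u)."""
    def J(u):
        f = np.exp(u)
        r = A @ f - g
        return (r @ r) / (2 * sigma**2) + lam * np.sum(f * np.log(f) - f)

    def grad(u):
        f = np.exp(u)
        dJdf = (A.T @ (A @ f - g)) / sigma**2 + lam * np.log(f)
        return dJdf * f                  # chain rule through f = exp(u)

    res = minimize(J, np.zeros(A.shape[1]), jac=grad, method='CG')
    return np.exp(res.x)

rng = np.random.default_rng(2)
A = rng.random((30, 20))                 # toy projection matrix
f_true = rng.random(20) + 0.1
g = A @ f_true + rng.normal(0, 0.05, 30)
print(np.round(me_reconstruct(A, g), 2))
```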

Journal ArticleDOI
TL;DR: The problem of extracting point spread functions from detector aperture functions in high-resolution PET is addressed and it appears to be adequate to relate the imaging capabilities in every point of the camera reconstruction field to the geometric and physical characteristics of the detection system.
Abstract: The problem of extracting point spread functions from detector aperture functions in high-resolution PET is addressed. In the limit of detectors that are very small relative to the ring dimensions, assumptions are made that lead to a fast and simple computation model yielding point spread functions with negligible errors due to the reconstruction algorithm. The method allows one to assess accurately the intrinsic performance of a PET tomograph, and it appears to be adequate to relate the imaging capabilities at every point of the camera reconstruction field to the geometric and physical characteristics of the detection system. The method was developed as an investigation tool to help design the next generation of very-high-resolution PET tomographs.

Journal ArticleDOI
TL;DR: Comparisons are made between different analytic attenuation compensation methods used in SPECT imaging; none of the methods is truly quantitative, even without the presence of scatter, but all are potentially useful and improve quantitation.
Abstract: Comparisons are made between different analytic attenuation compensation methods used in SPECT imaging. The methods include a multiplicative technique and a single-iterative technique, both applied after filtered backprojection, and two different implementations of an attenuation-weighted filtered backprojection technique (A-W FBP). The methods are compared using simple phantoms of line sources and water-filled circular and elliptical cylinders. Both simulated data (without scatter) and experimental data (with scatter) are reconstructed. None of the methods is truly quantitative, even without the presence of scatter, but all are potentially useful and improve quantitation. All techniques provide good peak compensation for line sources in attenuating media. The weighted backprojection methods eliminate nearly all deformation due to nonisotropic attenuation for a line source centered in an ellipse. However, the measured noise is amplified by A-W FBP unless the method is further modified to reduce the amplification.
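
Multiplicative post-reconstruction compensation of the kind compared here is typically of the Chang type: reconstruct with filtered backprojection, then multiply each pixel by the reciprocal of its attenuation factor averaged over projection angles. A sketch for a uniform circular attenuator follows; the geometry, grid, and attenuation coefficient are illustrative assumptions.

```python
import numpy as np

def chang_correction_factors(n, radius_px, mu, n_angles=64):
    """First-order multiplicative (Chang-type) attenuation correction for a
    uniform circular attenuator: at each pixel, average exp(-mu * l) over
    projection angles, where l is the path length to the circle boundary,
    and take the reciprocal as the post-reconstruction correction factor."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    acc = np.zeros((n, n))
    for th in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        d = x * np.cos(th) + y * np.sin(th)          # position along the ray
        perp2 = x**2 + y**2 - d**2
        # Distance from the pixel to the circle edge along this direction:
        l = -d + np.sqrt(np.maximum(radius_px**2 - perp2, 0.0))
        acc += np.exp(-mu * np.maximum(l, 0.0))
    factors = n_angles / np.maximum(acc, 1e-12)
    inside = x**2 + y**2 <= radius_px**2
    return np.where(inside, factors, 1.0)

# A 20-cm-radius water cylinder (mu ~ 0.15/cm for Tc-99m) on a 128 grid:
px_cm = 40.0 / 128                                   # pixel size in cm
C = chang_correction_factors(128, radius_px=20.0 / px_cm, mu=0.15 * px_cm)
print(C[64, 64], C[64, 20])  # the center needs far more boost than the edge
```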

Journal ArticleDOI
TL;DR: A model for the scatter point-spread-function parameterized only by air gap suggests that small increases in air gap significantly attenuate the higher-frequency structure of the scatter distribution.
Abstract: Characterizing the scatter point-spread-function (PSF) yields a model for predicting the behavior of the scatter and a PSF for deconvolution techniques that correct for scatter. Assuming the X-ray scatter is isotropic, the authors present a model for the scatter point-spread-function parameterized only by air gap. This model suggests that small increases in air gap significantly attenuate the higher-frequency structure of the scatter distribution. To evaluate this model, the authors examined the behavior of the spatial frequency distributions of experimental scatter images as a function of air gap. Using film as the detector, they imaged a 20-cm uniformly thick water phantom with aperture diameters of 8, 12, 16, 20, and 24 mm at air gaps of 0-24 cm.

Journal ArticleDOI
TL;DR: The Compton scattered attenuation and the Compton scattered background were both modeled and measured for point sources centered in scattering spheres up to 10 cm in diameter, and good agreement was obtained between simulations and measurements.
Abstract: Compton scattering of gamma rays within the image volume has been assessed for a large-aperture positron-emission-tomography imaging system. The Compton scattered attenuation and the Compton scattered background were both modeled and measured for point sources centered in scattering spheres up to 10 cm in diameter. Good agreement was obtained between simulations and measurements. The attenuation problem is independent of the detector system, but its correction is more difficult in a large-aperture system. The scattered coincidence background is large in this system (43% for a 10-cm-diameter scattering sphere), but the background overlap is reduced with 3D imaging.