
Showing papers in "IEEE Transactions on Medical Imaging in 1993"


Journal Article•DOI•
TL;DR: Algebraic reconstruction techniques (ART) are iterative procedures for recovering objects from their projections. The authors claim that, with careful adjustment of the data-access order and the relaxation parameters, ART can produce high-quality reconstructions with excellent computational efficiency; this is demonstrated on a realistic medical imaging task in which ART matches the performance of the standard expectation-maximization approach for maximizing likelihood at an order of magnitude less computational cost.
Abstract: Algebraic reconstruction techniques (ART) are iterative procedures for recovering objects from their projections. It is claimed that by a careful adjustment of the order in which the collected data are accessed during the reconstruction procedure and of the so-called relaxation parameters that are to be chosen in an algebraic reconstruction technique, ART can produce high-quality reconstructions with excellent computational efficiency. This is demonstrated by an example based on a particular (but realistic) medical imaging task, showing that ART can match the performance of the standard expectation-maximization approach for maximizing likelihood (from the point of view of that particular medical task), but at an order of magnitude less computational cost.

509 citations
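The row-action update ART performs can be sketched as follows. The relaxation value, sweep count, and access order below are illustrative placeholders for the parameters the abstract says must be tuned carefully, not the paper's choices.

```python
import numpy as np

def art(A, b, n_sweeps=200, relaxation=0.5, order=None):
    """Kaczmarz-style ART: the image estimate is updated one measurement
    (row) at a time. `relaxation` and `order` stand in for the relaxation
    parameters and data-access ordering the abstract says must be tuned;
    the values here are illustrative only."""
    m, n = A.shape
    x = np.zeros(n)
    rows = np.arange(m) if order is None else np.asarray(order)
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in rows:
            residual = b[i] - A[i] @ x          # mismatch for one projection ray
            x += relaxation * residual / row_norms[i] * A[i]
    return x

# Toy system: recover a 2-pixel "object" from 3 consistent projections.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x = art(A, A @ x_true)
```

For a consistent system the under-relaxed sweeps converge to the solution; the paper's point is that the ordering and relaxation schedule, not the update formula itself, determine the reconstruction quality per unit of computation.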


Journal Article•DOI•
TL;DR: A general cone-beam reconstruction algorithm that allows various scanning loci, handles reconstruction of rod-shaped specimens which are common in practice, and facilitates near real-time reconstruction by providing the same computational efficiency and parallelism as L.A. Feldkamp et al.'s (1984) algorithm.
Abstract: Considering the characteristics of the X-ray microscope system being developed at SUNY at Buffalo and the limitations of available cone-beam reconstruction algorithms, a general cone-beam reconstruction algorithm and several special versions of it are proposed and validated by simulation. The cone-beam algorithm allows various scanning loci, handles reconstruction of rod-shaped specimens which are common in practice, and facilitates near real-time reconstruction by providing the same computational efficiency and parallelism as L.A. Feldkamp et al.'s (1984) algorithm. Although the present cone-beam algorithm is not exact, it consistently gives satisfactory reconstructed images. Furthermore, it has several nice properties if the scanning locus meets some conditions. First, reconstruction within a midplane is exact using a planar scanning locus. Second, the vertical integral of a reconstructed image is equal to that of the actual image. Third, reconstruction is exact if an actual image is independent of rotation axis coordinate z. Also, the general algorithm can uniformize and reduce z-axis artifacts, if a helix-like scanning locus is used.

386 citations


Journal Article•DOI•
TL;DR: A new approach to the correction of intra-slice intensity variations is presented and results demonstrate that the correction process enhances the performance of backpropagation neural network classifiers designed for the segmentation of the images.
Abstract: A number of supervised and unsupervised pattern recognition techniques have been proposed in recent years for the segmentation and the quantitative analysis of MR images. However, the efficacy of these techniques is affected by acquisition artifacts such as inter-slice, intra-slice, and inter-patient intensity variations. Here a new approach to the correction of intra-slice intensity variations is presented. Results demonstrate that the correction process enhances the performance of backpropagation neural network classifiers designed for the segmentation of the images. Two slightly different versions of the method are presented. The first version fits an intensity correction surface directly to reference points selected by the user in the images. The second version fits the surface to reference points obtained by an intermediate classification operation. Qualitative and quantitative evaluation of both methods reveals that the first one leads to a better correction of the images than the second but that it is more sensitive to operator errors.

342 citations
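The first variant (fitting a correction surface directly to user-selected reference points) can be sketched as a least-squares polynomial fit. The quadratic order, function names, and (x, y) = (column, row) convention are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def fit_correction_surface(points, intensities, shape):
    """Least-squares fit of a quadratic intensity surface to user-selected
    reference points (the first variant in the abstract). The quadratic
    order and (x, y) = (column, row) convention are assumptions."""
    x, y = np.asarray(points, dtype=float).T
    # Design matrix for c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    G = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coef, *_ = np.linalg.lstsq(G, np.asarray(intensities, dtype=float), rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    return (coef[0] + coef[1] * xx + coef[2] * yy
            + coef[3] * xx ** 2 + coef[4] * xx * yy + coef[5] * yy ** 2)

# Reference points sampled from a known smooth shading pattern.
pts = [(i, j) for i in range(0, 32, 8) for j in range(0, 32, 8)]
vals = np.array([100.0 + 0.5 * i + 0.2 * j for i, j in pts])
surface = fit_correction_surface(pts, vals, (32, 32))
```

Dividing (or subtracting, in log intensities) the image by such a surface removes the smooth intra-slice variation while leaving tissue contrast intact.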


Journal Article•DOI•
TL;DR: The proposed approach uses a two-stage algorithm for spot detection and shape extraction that opens up the possibility of a reproducible segmentation of microcalcifications, which is a necessary precondition for an efficient screening program.
Abstract: A systematic method for the detection and segmentation of microcalcifications in mammograms is presented. It is important to preserve size and shape of the individual calcifications as exactly as possible. A reliable diagnosis requires both rates of false positives as well as false negatives to be extremely low. The proposed approach uses a two-stage algorithm for spot detection and shape extraction. The first stage applies a weighted difference of Gaussians filter for the noise-invariant and size-specific detection of spots. A morphological filter reproduces the shape of the spots. The results of both filters are combined with a conditional thickening operation. The topology and the number of the spots are determined with the first filter, and the shape by means of the second. The algorithm is tested with a series of real mammograms, using identical parameter values for all images. The results are compared with the judgement of radiological experts, and they are very encouraging. The described approach opens up the possibility of a reproducible segmentation of microcalcifications, which is a necessary precondition for an efficient screening program.

240 citations
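The first-stage spot detector can be sketched as a weighted difference of Gaussians. The sigma values and the weight below are illustrative guesses, and the morphological second stage and conditional thickening are omitted.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (zero-padded borders)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode='same')

def weighted_dog(img, sigma_small=1.0, sigma_large=3.0, weight=0.9):
    """Weighted difference of Gaussians: a band-pass spot detector tuned
    to the expected calcification size. Sigmas and weight are guesses."""
    return blur(img, sigma_small) - weight * blur(img, sigma_large)

# A single bright "microcalcification" yields a sharp positive peak.
img = np.zeros((41, 41))
img[20, 20] = 100.0
resp = weighted_dog(img)
```

Thresholding the filter response gives the spot positions; the paper then recovers the exact spot shapes with the morphological stage rather than from this band-pass response.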


Journal Article•DOI•
TL;DR: The simulations show that the inclusion of position-dependent anatomical prior information leads to further improvement relative to Bayesian reconstructions without the anatomical prior, and the algorithm exhibits a certain degree of robustness with respect to errors in the location of anatomical boundaries.
Abstract: Proposes a Bayesian method whereby maximum a posteriori (MAP) estimates of functional (PET and SPECT) images may be reconstructed with the aid of prior information derived from registered anatomical MR images of the same slice. The prior information consists of significant anatomical boundaries that are likely to correspond to discontinuities in an otherwise spatially smooth radionuclide distribution. The authors' algorithm, like others proposed recently, seeks smooth solutions with occasional discontinuities; the contribution here is the inclusion of a coupling term that influences the creation of discontinuities in the vicinity of the significant anatomical boundaries. Simulations on anatomically derived mathematical phantoms are presented. Although computationally intense in its current implementation, the algorithm yields reconstructions that are improved (ROI-RMS error) relative to filtered backprojection and EM-ML reconstructions. The simulations show that the inclusion of position-dependent anatomical prior information leads to further improvement relative to Bayesian reconstructions without the anatomical prior. The algorithm exhibits a certain degree of robustness with respect to errors in the location of anatomical boundaries.

238 citations


Journal Article•DOI•
TL;DR: This work investigates the potential of artificial neural networks for classification of registered magnetic resonance and X-ray computed tomography images of the human brain, and uses their generalization properties to develop an adaptive learning scheme able to overcome interslice intensity variations typical of MR images.
Abstract: This work presents an investigation of the potential of artificial neural networks for classification of registered magnetic resonance and X-ray computed tomography images of the human brain. First, topological and learning parameters are established experimentally. Second, the learning and generalization properties of the neural networks are compared to those of a classical maximum likelihood classifier and the superiority of the neural network approach is demonstrated when small training sets are utilized. Third, the generalization properties of the neural networks are utilized to develop an adaptive learning scheme able to overcome interslice intensity variations typical of MR images. This approach permits the segmentation of image volumes based on training sets selected on a single slice. Finally, the segmentation results obtained both with the artificial neural network and the maximum likelihood classifiers are compared to contours drawn manually.

232 citations


Journal Article•DOI•
Linda Kaufman•
TL;DR: It is shown that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach.
Abstract: The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in positron emission tomography (PET). The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. It is shown that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results are applied to various penalized least squares functions which might be used to produce a smoother image.

193 citations
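The core idea of carrying the EM-style diagonal scaling over to the least-squares merit function can be sketched as below. The scaling D = diag(x / A^T 1), step size, and iteration count are assumptions for illustration, and the paper's conjugate-gradient acceleration is omitted.

```python
import numpy as np

def scaled_descent_ls(A, b, n_iter=2000, step=0.2):
    """Least-squares minimization with an EM-style diagonal scaling
    D = diag(x / A^T 1); the clamp keeps iterates nonnegative. The step
    size and iteration count are illustrative, not from the paper."""
    col_sums = A.sum(axis=0)                   # A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        neg_grad = A.T @ (b - A @ x)           # negative gradient of 0.5*||b - Ax||^2
        x = np.maximum(x + step * (x / col_sums) * neg_grad, 0.0)
    return x

# Toy tomographic system with a known nonnegative source.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
x = scaled_descent_ls(A, A @ x_true)
```

The multiplicative factor x in the scaling is what makes the nonnegativity constraint essentially self-enforcing, which is the property the EM algorithm handles "elegantly" per the abstract.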


Journal Article•DOI•
TL;DR: The paper presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain that provides an accurate, complete labeling of all normal tissues in the absence of large amounts of data nonuniformity.
Abstract: Presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain. The system consists of 2 components: an unsupervised clustering algorithm and an expert system. MR brain data is initially segmented by the unsupervised algorithm, then the expert system locates a landmark tissue or cluster and analyzes it by matching it with a model or searching in it for an expected feature. The landmark tissue location and its analysis are repeated until a tumor is found or all tissues are labeled. The knowledge base contains information on cluster distribution in feature space and tissue models. Since tissue shapes are irregular, their models and matching are specially designed: 1) qualitative tissue models are defined for brain tissues such as white matter; 2) default reasoning is used to match a model with an MR image; that is, if there is no mismatch between a model and an image, they are taken as matched. The system has been tested with 53 slices of MR images acquired at different times by 2 different scanners. It accurately identifies abnormal slices and provides a partial labeling of the tissues. It provides an accurate complete labeling of all normal tissues in the absence of large amounts of data nonuniformity, as verified by radiologists. Thus the system can be used to provide automatic screening of slices for abnormality. It also provides a first step toward the complete description of abnormal images for use in automatic tumor volume determination.

189 citations


Journal Article•DOI•
TL;DR: It is shown that both the image space reconstruction algorithm (ISRA) and the expectation maximization (EM) algorithm may be obtained from a common mathematical framework, and this fact is used to extend ISRA for penalized likelihood estimates.
Abstract: The image space reconstruction algorithm (ISRA) was proposed as a modification of the expectation maximization (EM) algorithm based on physical considerations for application in volume emission computed tomography. As a consequence of this modification, ISRA searches for least squares solutions instead of maximizing Poisson likelihoods as the EM algorithm does. It is shown that both algorithms may be obtained from a common mathematical framework. This fact is used to extend ISRA for penalized likelihood estimates.

182 citations
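The ISRA update itself is compact enough to sketch; the fixed iteration count is an illustrative choice, not a recommended stopping rule.

```python
import numpy as np

def isra(A, b, n_iter=2000):
    """ISRA multiplicative update: x <- x * (A^T b) / (A^T A x).
    Iterates stay nonnegative and converge toward a nonnegative
    least-squares solution; the small constant guards the division."""
    x = np.ones(A.shape[1])
    Atb = A.T @ b
    for _ in range(n_iter):
        x = x * Atb / (A.T @ (A @ x) + 1e-12)
    return x

# Toy emission system with a known nonnegative source.
A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])
x_true = np.array([0.5, 1.5])
x = isra(A, A @ x_true)
```

Compare with EM, whose multiplicative factor is A^T(b / Ax) normalized by the column sums; the shared multiplicative structure is what the common framework in the abstract formalizes.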


Journal Article•DOI•
TL;DR: The problem of automatic quantification of brain tissue by utilizing single-valued (single echo) magnetic resonance imaging (MRI) brain scans is addressed and it is shown that this problem can be solved without classification or segmentation.
Abstract: The problem of automatic quantification of brain tissue by utilizing single-valued (single echo) magnetic resonance imaging (MRI) brain scans is addressed. It is shown that this problem can be solved without classification or segmentation, a method that may be particularly useful in quantifying white matter lesions where the range of values associated with the lesions and the white matter may heavily overlap. The general technique utilizes a statistical model of the noise and partial volume effect together with a finite mixture density description of the tissues. The quantification is then formulated as a minimization problem of high order with up to six separate densities as part of the mixture. This problem is solved by tree annealing with and without partial volume utilized, the results are compared, and the sensitivity of the tree annealing algorithm to various parameters is exhibited. The actual quantification is performed by two methods: a classification-based method called Bayes quantification, and parameter estimation. Results from each method are presented for synthetic and actual data.

179 citations
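The finite-mixture idea can be illustrated with a plain EM fit of a two-component Gaussian mixture. Note that the paper optimizes its mixture (up to six densities, with a partial-volume model) by tree annealing; EM is substituted here purely as a compact sketch of the mixture-density description.

```python
import numpy as np

def fit_two_gaussian_mixture(samples, n_iter=200):
    """Plain EM for a two-component Gaussian mixture: a stand-in for the
    paper's tree-annealing optimizer, illustrating only the finite
    mixture model of tissue intensities."""
    x = np.asarray(samples, dtype=float)
    mu = np.array([x.min(), x.max()])          # crude initialization
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted moment estimates
        w = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0)) + 1e-9
    return w, mu, sd

# Two well-separated intensity populations, mimicking two tissue classes.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.0, 1.0, 1000), rng.normal(6.0, 1.0, 1000)])
w, mu, sd = fit_two_gaussian_mixture(samples)
```

Once the mixture parameters are known, tissue volumes follow directly from the mixing weights, which is why the abstract's quantification needs no per-pixel segmentation.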


Journal Article•DOI•
TL;DR: A computer algorithm was developed for automated identification of 2-D vascular networks in X-ray angiograms by using an adaptive tracking algorithm in a three-stage recursive procedure to prevent the problem of tracking-path reentry in those areas where vessels overlap.
Abstract: A computer algorithm was developed for automated identification of 2-D vascular networks in X-ray angiograms. This was accomplished by using an adaptive tracking algorithm in a three-stage recursive procedure. First, given a starting position and direction, a segment in the vascular network was identified. Second, by filling it with the surrounding background pixel values, the detected segment was deleted from the angiogram. The detection-deletion scheme was employed to prevent the problem of tracking-path reentry in those areas where vessels overlap. Third, all branch points were detected by use of matched filtering along both edges of the vessel. The detected branch points were used as the starting points in the next recursion. The recursive procedure terminated when no new branch point was found. The algorithm showed a good performance when it was applied to angiograms of coronary and radial arteries. To provide a quantitative evaluation, vascular networks identified by the algorithm were compared to those identified by a human. The algorithm made some false-negative errors, but very few false-positive errors.

Journal Article•DOI•
TL;DR: The results indicate that the non-Rayleigh statistics seem to be useful in characterizing and identifying malignant, benign, and normal tissue regions.
Abstract: A model for the scattering of ultrasound from breast tissue is proposed. The model is based on the use of non-Rayleigh statistics, specifically the K distribution, to describe the backscattered echo from the tissue. A multiparameter test based on this model has been designed to characterize the tissue. The data from the B-scan images of the breasts of 6 different patients were analyzed using this model. The results indicate that the non-Rayleigh statistics seem to be useful in characterizing and identifying malignant, benign, and normal tissue regions.

Journal Article•DOI•
TL;DR: A new in vivo method to correct the nonlinear, object-shape-dependent and material-dependent spatial distortion in magnetic resonance (MR) images caused by magnetic susceptibility variations is presented.
Abstract: The authors present a new in vivo method to correct the nonlinear, object-shape-dependent and material-dependent spatial distortion in magnetic resonance (MR) images caused by magnetic susceptibility variations. This distortion across the air/tissue interface before and after the correction is quantified using a phantom. The results are compared to the distortion-free computed tomography (CT) images of the same phantom by fusing CT and MR images using fiducials, with a registration accuracy of better than a millimeter. The distortion at the bone/tissue boundary is negligible compared to the typical MRI (MR imaging) resolution of 1 mm, while that at the air/tissue boundary creates displacements of about 2 mm (for Gx = 3.13 mT/m). This is a significant value if MRI is to provide highly accurate geometric measurements, as in the case of target localization for stereotaxic surgery. The correction scheme provides MR images with accuracy similar to that of CT: 1 mm. A new method to estimate the magnetic susceptibility of materials from MR images is presented. The magnetic susceptibility of cortical bone is measured using a SQUID magnetometer, and is found to be -8.86 ppm (with respect to air), which is quite similar to that of tissue (-9 ppm).

Journal Article•DOI•
TL;DR: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age using a video camera and commercial frame grabber on a PC-based computer system.
Abstract: The classification of 3 common breast lesions, fibroadenomas, cysts, and cancers, was achieved using computerized image analysis of tumor shape in conjunction with patient age. The process involved the digitization of 69 mammographic images using a video camera and a commercial frame grabber on a PC-based computer system. An interactive segmentation procedure identified the tumor boundary using a thresholding technique which successfully segmented 57% of the lesions. Several features were chosen based on the gross and fine shape describing properties of the tumor boundaries as seen on the radiographs. Patient age was included as a significant feature in determining whether the tumor was a cyst, fibroadenoma, or cancer and was the only patient history information available for this study. The concept of a radial length measure provided a basis from which 6 of the 7 shape describing features were chosen, the seventh being tumor circularity. The feature selection process was accomplished using linear discriminant analysis and a Euclidean distance metric determined group membership. The effectiveness of the classification scheme was tested using both the apparent and the leaving-one-out test methods. The best results using the apparent test method resulted in correctly classifying 82% of the tumors segmented using the entire feature space and the highest classification rate using the leaving-one-out test method was 69% using a subset of the feature space. The results using only the shape descriptors, and excluding patient age resulted in correctly classifying 72% using the entire feature space (except age), and 51% using a subset of the feature space.
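Features built on the radial length measure can be sketched as follows; the feature names and the 0.1 roughness threshold are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def radial_length_features(boundary):
    """Features derived from the normalized radial length (distance from
    the boundary points to their centroid). Names and the roughness
    threshold are illustrative assumptions."""
    pts = np.asarray(boundary, dtype=float)
    center = pts.mean(axis=0)
    d = np.linalg.norm(pts - center, axis=1)
    d = d / d.max()                            # normalized radial length
    return {
        "mean_radial_length": d.mean(),
        "radial_length_std": d.std(),          # spread ~ boundary roughness
        "roughness_ratio": float((np.abs(np.diff(d)) > 0.1).mean()),
    }

# A circle has constant radial length, so the spread features vanish;
# spiculated cancerous boundaries would score high on them.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
feats = radial_length_features(circle)
```

Smooth lesions (cysts, fibroadenomas) concentrate near low spread values, which is what makes such features discriminative in a linear discriminant analysis.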

Journal Article•DOI•
TL;DR: The proposed technique can enhance the role of statistical segmentation algorithms in body MRI data sets by reducing volumes of significant corruption through a large reduction in the inhomogeneity.
Abstract: The usefulness of statistical clustering algorithms developed for automatic segmentation of lesions and organs in magnetic resonance imaging (MRI) intensity data sets suffers from spatial nonstationarities introduced into the data sets by the acquisition instrumentation. The major intensity inhomogeneity in MRI is caused by variations in the B1-field of the radio frequency (RF) coil. A three-step method was developed to model and then reduce the effect. Using a least squares formulation, the inhomogeneity is modeled as a maximum variation order two polynomial. In the log domain the polynomial model is subtracted from the actual patient data set resulting in a compensated data set. The compensated data set is exponentiated and rescaled. Statistical comparisons indicate volumes of significant corruption undergo a large reduction in the inhomogeneity, whereas volumes of minimal corruption are not significantly changed. Acting as a preprocessor, the proposed technique can enhance the role of statistical segmentation algorithms in body MRI data sets.
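The three steps (fit an order-two polynomial, subtract it in the log domain, exponentiate and rescale) can be sketched as below. Fitting the polynomial to every pixel rather than to a selected subset is a simplification of this sketch, not the paper's procedure.

```python
import numpy as np

def correct_inhomogeneity(img):
    """Three-step correction sketched from the abstract: fit an order-two
    polynomial to the log-domain image, subtract it, then exponentiate
    and rescale. Fitting all pixels directly is a simplification."""
    log_img = np.log(img)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    G = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel(),
                         xx.ravel() ** 2, (xx * yy).ravel(), yy.ravel() ** 2])
    coef, *_ = np.linalg.lstsq(G, log_img.ravel(), rcond=None)
    model = (G @ coef).reshape(img.shape)
    compensated = np.exp(log_img - model)      # multiplicative bias removed
    return compensated * img.mean() / compensated.mean()

# Uniform "tissue" under a smooth multiplicative RF bias field.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
img = 50.0 * np.exp(0.02 * xx + 0.01 * yy)
corrected = correct_inhomogeneity(img)
```

Working in the log domain turns the multiplicative B1 bias into an additive term, which is why a linear least-squares fit suffices.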

Journal Article•DOI•
TL;DR: To increase reconstruction speed, spatially invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques have been applied.
Abstract: Because of the characteristics of the tomographic inversion problem, iterative reconstruction techniques often suffer from poor convergence rates, especially at high spatial frequencies. By using preconditioning methods, the convergence properties of most iterative methods can be greatly enhanced without changing their ultimate solution. To increase reconstruction speed, spatially invariant preconditioning filters that can be designed using the tomographic system response and implemented using 2-D frequency-domain filtering techniques have been applied. In a sample application, reconstructions from noiseless, simulated projection data were performed using preconditioned and conventional steepest-descent algorithms. The preconditioned methods demonstrated residuals that were up to a factor of 30 lower than the unassisted algorithms at the same iteration. Applications of these methods to regularized reconstructions from projection data containing Poisson noise showed similar, although not as dramatic, behavior.
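A sketch of spatially invariant frequency-domain preconditioning on a diagonal (circular-convolution) model problem; the filter 1/(|H|^2 + eps) and the eps value are illustrative stand-ins built from the system response, not the paper's filter design.

```python
import numpy as np

def freq_descent(H, B, n_iter=50, precondition=True, eps=1e-2):
    """Steepest descent on 0.5*||h (*) x - b||^2 for a circular-convolution
    model, run in the frequency domain where the operator is diagonal.
    The spatially invariant preconditioner 1/(|H|^2 + eps) boosts the
    poorly converging high frequencies; eps is a guess."""
    X = np.zeros_like(B)
    M = 1.0 / (np.abs(H) ** 2 + eps) if precondition else 1.0
    for _ in range(n_iter):
        R = H * X - B                          # frequency-domain residual
        G = np.conj(H) * R                     # gradient
        D = M * G                              # (preconditioned) direction
        HD = H * D
        step = np.vdot(D, G).real / (np.vdot(HD, HD).real + 1e-30)
        X = X - step * D                       # exact line search step
    return X, np.linalg.norm(H * X - B)

rng = np.random.default_rng(1)
x_true = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0                        # wrapped 3x3 box blur
H = np.fft.fft2(psf)
B = H * np.fft.fft2(x_true)
_, r_pre = freq_descent(H, B, precondition=True)
_, r_plain = freq_descent(H, B, precondition=False)
```

At equal iteration counts the preconditioned residual is far smaller, mirroring the up-to-30x residual reduction reported in the abstract, while the minimizer itself is unchanged.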

Journal Article•DOI•
TL;DR: The algorithms for segmentation and interpolation of the MRI data give an isotropic binary representation of both gray and white matter available for 3-D surface rendering, and the segmentation favorably compares to a manual one.
Abstract: The authors propose a method for the 3-D reconstruction of the brain from anisotropic magnetic resonance imaging (MRI) brain data. The method essentially consists of two original algorithms, one for segmentation and one for interpolation of the MRI data. The segmentation process is performed in three steps. A gray level thresholding of the white and gray matter tissue is performed on the brain MR raw data. A global white matter segmentation is automatically performed with a global 3-D connectivity algorithm which takes into account the anisotropy of the MRI voxel. The gray matter is segmented with a local 3-D connectivity algorithm. Mathematical morphology tools are used to interpolate slices. The whole process gives an isotropic binary representation of both gray and white matter which is available for 3-D surface rendering. The power and practicality of this method have been tested on four brain datasets. The segmentation algorithm favorably compares to a manual one. The interpolation algorithm was compared to the shape-based method both quantitatively and qualitatively.

Journal Article•DOI•
TL;DR: An approximation formula for the variance of positron emission tomography region-of-interest (ROI) values has been developed, implemented, and evaluated and showed to be accurate to within +/-10%.
Abstract: An approximation formula for the variance of positron emission tomography (PET) region-of-interest (ROI) values has been developed, implemented, and evaluated. This formula does not require access to the original projection data and is therefore convenient for routine use. The formula was derived by applying successive approximations to the filtered-backprojection reconstruction algorithm. ROI variance is estimated from the product of mean pixel variance within the region and a term accounting for the intercorrelation of all pixel pairs inside the region. The formula accounts for radioactivity distribution, attenuation, randoms, scatter, deadtime, detector normalization, scan length, decay, and reconstruction filter. The algorithm was tested by comparison to the exact ROI variance as calculated with Huesman's algorithm. Tests with scan data from phantoms, animals, and humans obtained on the Scanditronix PC2048-15B tomograph showed the approximation formula to be accurate to within +/-10%.

Journal Article•DOI•
TL;DR: Stochastic temporal filtering techniques are proposed to enhance clinical fluoroscopy sequences corrupted by quantum mottle and the problem of displacement field estimation is treated in conjunction with the filtering stage to ensure that the temporal correlations are taken along the direction of motion to prevent object blur.
Abstract: Clinical angiography requires hundreds of X-ray images, putting the patients and particularly the medical staff at risk. Dosage reduction involves an inevitable sacrifice in image quality. In this work, the latter problem is addressed by first modeling the signal-dependent, Poisson-distributed noise that arises as a result of this dosage reduction. The commonly utilized noise model for single images is shown to be obtainable from the new model. Stochastic temporal filtering techniques are proposed to enhance clinical fluoroscopy sequences corrupted by quantum mottle. The temporal versions of these filters as developed here are more suitable for filtering image sequences, as correlations along the time axis can be utilized. For these dynamic sequences, the problem of displacement field estimation is treated in conjunction with the filtering stage to ensure that the temporal correlations are taken along the direction of motion to prevent object blur.

Journal Article•DOI•
TL;DR: An efficient and robust image reconstruction algorithm for static impedance imaging using Hachtel's augmented matrix method was developed and it was shown that the parallel computation could reduce the computation time from hours to minutes.
Abstract: An efficient and robust image reconstruction algorithm for static impedance imaging using Hachtel's augmented matrix method was developed. This improved Newton-Raphson method produced more accurate images by reducing the undesirable effects of the ill-conditioned Hessian matrix. It is demonstrated that the electrical impedance tomography (EIT) system could produce two-dimensional static images from a physical phantom with 7% spatial resolution at the center and 5% at the periphery. Static EIT image reconstruction requires a large amount of computation. In order to overcome the limitations on reducing the computation time by algorithmic approaches, the improved Newton-Raphson algorithm was implemented on a parallel computer system. It is shown that the parallel computation could reduce the computation time from hours to minutes.

Journal Article•DOI•
TL;DR: It is shown that a combination of the mathematical morphology operation, opening, with a linear rotating structuring element (ROSE) and dual feature thresholding can semi-automatically segment categories of vessels in a vascular network.
Abstract: A method for measuring the spatial concentration of specific categories of vessels in a vascular network consisting of vessels of several diameters, lengths, and orientations is demonstrated. It is shown that a combination of the mathematical morphology operation, opening, with a linear rotating structuring element (ROSE) and dual feature thresholding can semi-automatically segment categories of vessels in a vascular network. Capillaries and larger vessels (arterioles and venules) are segmented here in order to assess their spatial concentrations. The ROSE algorithm generates the initial segmentation, and dual feature thresholding provides a means of eliminating the nonedge artifact pixels. The subsequent gray-scale histogram of only the edge pixels yields the correct segmentation threshold value. This image processing strategy is demonstrated on micrographs of vascular casts. By adjusting the structuring element and rotation angles, it could be applied to other network structures where a segmentation by network component categories is advantageous, but where the objects can have any orientation.
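A binary sketch of opening with a rotating linear structuring element; the element length and the set of angles are illustrative choices, and the dual feature thresholding stage is omitted.

```python
import numpy as np

def line_offsets(length, angle_deg):
    """Pixel offsets of a centered linear structuring element."""
    a = np.deg2rad(angle_deg)
    return {(int(np.rint(t * np.sin(a))), int(np.rint(t * np.cos(a))))
            for t in np.arange(length) - (length - 1) / 2}

def shifted(img, dy, dx):
    """out[y, x] = img[y + dy, x + dx], zero outside the frame."""
    h, w = img.shape
    out = np.zeros_like(img)
    out[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)] = \
        img[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
    return out

def rose_opening(img, length=7, angles=(0, 45, 90, 135)):
    """Union of binary openings with a rotating linear structuring element:
    keeps line-like structures at any tested angle, removes small spots."""
    result = np.zeros_like(img)
    for a in angles:
        offs = line_offsets(length, a)
        er = np.logical_and.reduce([shifted(img, dy, dx) for dy, dx in offs])
        op = np.logical_or.reduce([shifted(er, -dy, -dx) for dy, dx in offs])
        result |= op
    return result

img = np.zeros((20, 20), dtype=bool)
img[10, 2:17] = True          # a 15-pixel horizontal "vessel"
img[5, 5] = True              # an isolated spot
out = rose_opening(img)
```

Because an opening only keeps pixels covered by a fully contained structuring element, vessels shorter or thinner than the line at every tested angle are suppressed, which is how vessel categories separate by caliber.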

Journal Article•DOI•
TL;DR: Hand-bone analysis with image processing techniques using a digital radiograph can be used to assess skeletal age using a standard thresholding technique and a dynamic thresholding method with variable window sizes to differentiate between the bones and the soft tissue.
Abstract: Hand-bone analysis with image processing techniques using a digital radiograph can be used to assess skeletal age. The analysis consists of two steps: phalangeal and carpal bone analysis. The carpal bone analysis is discussed. First, the carpal bone region of interest (CROI) is defined using a standard thresholding technique to separate the hand from the background. Then, a dynamic thresholding method with variable window sizes is used to differentiate between the bones and the soft tissue. Next, the radius, ulna, and metacarpals intersecting the borders of the CROI are removed by using mathematical morphology. Finally, all objects included in the corrected CROI are separated and described in terms of features. These features describe the size, shape, and location and include some gray-scale pixel value information. On the basis of this analysis, the separation of the noncarpal bone objects from the carpal bone is possible. The feature selection step removes features of low discriminant power and reduces the space dimension. The remaining carpal bone parameters are used for further analysis leading to skeletal age assessment.
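The dynamic-thresholding step can be sketched as a local-mean comparison; the window size and offset below are illustrative, not the paper's values, and the variable-window refinement is omitted.

```python
import numpy as np

def dynamic_threshold(img, window=21, offset=0.0):
    """Each pixel is compared with the mean of its local window, so the
    bone/soft-tissue split adapts to the background level. Window size
    and offset are illustrative assumptions."""
    h, w = img.shape
    r = window // 2
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
            out[y, x] = img[y, x] > patch.mean() + offset
    return out

# Two bright "bones" on a ramp background: no single global threshold
# separates both, but the local rule does.
img = np.tile(np.linspace(0.0, 100.0, 64), (64, 1))
img[20:30, 20:30] += 80.0
img[20:30, 50:60] += 80.0
mask = dynamic_threshold(img)
```

A global threshold set for the dark end of the ramp would flood the bright end; comparing against the local mean sidesteps exactly the soft-tissue intensity drift the abstract describes.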

Journal Article•DOI•
TL;DR: Two methods for shape-based interpolation that offer an improvement to linear interpolation are presented and tests with 3-D images of the coronary arterial tree demonstrate the efficacy of the methods.
Abstract: Many three-dimensional (3-D) medical images have lower resolution in the z direction than in the x or y directions. Before extracting and displaying objects in such images, an interpolated 3-D gray-scale image is usually generated via a technique such as linear interpolation to fill in the missing slices. Unfortunately, when objects are extracted and displayed from the interpolated image, they often exhibit a blocky and generally unsatisfactory appearance, a problem that is particularly acute for thin treelike structures such as the coronary arteries. Two methods for shape-based interpolation that offer an improvement to linear interpolation are presented. In shape-based interpolation, the object of interest is first segmented (extracted) from the initial 3-D image to produce a low-z-resolution binary-valued image, and the segmented image is interpolated to produce a high-resolution binary-valued 3-D image. The first method incorporates geometrical constraints and takes as input a segmented version of the original 3-D image. The second method builds on the first in that it also uses the original gray-scale image as a second input. Tests with 3-D images of the coronary arterial tree demonstrate the efficacy of the methods.
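The core of shape-based interpolation, interpolating distance maps of the segmented slices rather than gray values, can be sketched as follows. The brute-force distance transform stands in for an efficient one, and the paper's geometric-constraint and gray-scale refinements are omitted.

```python
import numpy as np

def signed_distance(mask):
    """Signed distance to the object boundary: positive inside, negative
    outside. Brute force over all pixel pairs, fine at toy sizes."""
    grid = np.argwhere(np.ones_like(mask))
    inside, outside = np.argwhere(mask), np.argwhere(~mask)
    d_in = np.sqrt(((grid[:, None] - inside[None]) ** 2).sum(-1)).min(1)
    d_out = np.sqrt(((grid[:, None] - outside[None]) ** 2).sum(-1)).min(1)
    return np.where(mask.ravel(), d_out, -d_in).reshape(mask.shape)

# A missing slice between two segmented slices is estimated by averaging
# their signed distance maps and rethresholding at zero: a small and a
# large disk interpolate to an intermediate disk, not a blocky blend.
yy, xx = np.mgrid[0:16, 0:16]
r2 = (yy - 8) ** 2 + (xx - 8) ** 2
small, large = r2 <= 3 ** 2, r2 <= 6 ** 2
mid = 0.5 * (signed_distance(small) + signed_distance(large)) > 0
```

Linear interpolation of the binary slices would produce a halo of intermediate values with no clean boundary; interpolating distances keeps the result binary with a smoothly migrating contour.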

Journal Article•DOI•
TL;DR: A quantitative method of skin healing assessment using true color image processing is presented that provides a new quantitative global assessment of healing kinetics and is noninvasive and well suited for multicentric clinical trials.
Abstract: A quantitative method of skin healing assessment using true color image processing is presented. The method was developed during a clinical trial using healthy volunteers, the goal of which was to study a drug for accelerating healing. Photographic images of the skin were sequentially acquired between day 1 and day 12 after pure painless epidermal wounds. The images were digitized in controlled conditions using a color video camera connected to a computer system. A color threshold based segmentation was developed to provide an operator-independent delineation of the wound. Two healing indexes were built, measuring the wound area and the wound color. The method was implemented in a software system allowing a fully automated determination of the healing indexes. The method provides a new quantitative global assessment of healing kinetics. It is noninvasive and well suited for multicentric clinical trials.

Journal Article•DOI•
TL;DR: Two methodologies for fitting radiotracer models on a pixel-wise basis to PET data are considered and the results obtained by mixture analysis are found to have substantially improved mean square error performance characteristics.
Abstract: Two methodologies for fitting radiotracer models on a pixel-wise basis to PET data are considered. The first method performs parameter optimization for each pixel, considered as a separate region of interest. The second method also performs pixel-wise analysis but incorporates an additive mixture representation to account for heterogeneity effects induced by instrumental and biological blurring. Several numerical and statistical techniques including cluster analysis, constrained nonlinear optimization, subsampling, and spatial filtering are used to implement the methods. A computer simulation experiment, modeling a standard F-18 deoxyglucose (FDG) imaging protocol using the UW-PET scanner, is conducted to evaluate the statistical performance of the parametric images obtained by the two methods. The results obtained by mixture analysis are found to have substantially improved mean square error performance characteristics. The total computation time for mixture analysis is on the order of 0.7 s/pixel on a 16 MIPS workstation. This results in a total computation time of about 1 h per slice for a typical FDG brain study.
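Pixel-wise fitting of an FDG time-activity curve is often linearized for speed. The sketch below uses a Patlak-style graphical fit, a common FDG simplification and not the paper's mixture model, to show what a single pixel's fit looks like: the late-time tissue-to-plasma ratio is regressed against normalized integrated plasma activity, and the slope estimates the uptake-rate constant.

```python
import numpy as np

def patlak_ki(tissue_tac, plasma_tac, times):
    """Per-pixel Patlak-style linear fit (a common FDG simplification,
    not the paper's mixture method): regress C_t/C_p against
    int(C_p)/C_p and return the slope, the uptake constant Ki."""
    # cumulative trapezoidal integral of the plasma input
    cum_cp = np.concatenate(([0.0], np.cumsum(
        0.5 * (plasma_tac[1:] + plasma_tac[:-1]) * np.diff(times))))
    x = cum_cp / plasma_tac          # "Patlak time"
    y = tissue_tac / plasma_tac
    slope, intercept = np.polyfit(x[2:], y[2:], 1)  # fit the late frames
    return slope

# Synthetic pixel: plasma input plus a tissue curve obeying the Patlak model.
times = np.linspace(0, 60, 13)
cp = 100 * np.exp(-0.05 * times) + 5.0
ki_true, v0 = 0.02, 0.4
cum = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(times))))
ct = ki_true * cum + v0 * cp
ki_est = patlak_ki(ct, cp, times)    # recovers ki_true
```

Running a fit like this independently at every pixel corresponds to the first methodology; the mixture method additionally models each pixel as a blend of a few underlying kinetic classes.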

Journal Article•DOI•
TL;DR: Experimental results suggest that the MTS approach converges faster and produces better segmentation results than the single-level approach.
Abstract: A multiresolution texture segmentation (MTS) approach to image segmentation that addresses the issues of texture characterization, image resolution, and time to complete the segmentation is presented. The approach generalizes the conventional simulated annealing method to a multiresolution framework and minimizes an energy function that depends on the resolution and on the size of the texture blocks in an image. A rigorous experimental procedure is also proposed to demonstrate the advantages of the proposed MTS approach in terms of segmentation accuracy, algorithmic efficiency, and the use of varying features at different resolutions. Semireal images, created by sampling a series of diagnostic ultrasound images of an ovary in vitro, were used to produce statistical measures of the performance of the approach. The ultrasound images themselves were then segmented to determine whether the approach can achieve accurate results for the intended ultrasound application. Experimental results suggest that the MTS approach converges faster and produces better segmentation results than the single-level approach.
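The annealing step underlying this approach can be sketched at a single resolution level: labels are updated by the Metropolis rule against an energy combining data misfit and a Potts smoothness term, with a cooling temperature. This is a minimal single-level sketch under assumed class means and parameters; the paper's MTS method runs updates like these coarse-to-fine with resolution-dependent features.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_labels(features, means, sweeps=30, t0=1.0, cool=0.9, beta=0.2):
    """Minimal simulated-annealing segmentation: energy = squared misfit
    to an assumed class mean plus a Potts smoothness term over the
    4-neighborhood; labels are updated with the Metropolis rule."""
    h, w = features.shape
    n_labels = len(means)
    labels = rng.integers(n_labels, size=(h, w))

    def energy(y, x, k):
        e = (features[y, x] - means[k]) ** 2
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                e += beta * (labels[ny, nx] != k)
        return e

    T = t0
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                new = rng.integers(n_labels)
                dE = energy(y, x, new) - energy(y, x, labels[y, x])
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    labels[y, x] = new
        T *= cool                     # geometric cooling schedule
    return labels

# Two-texture toy image: left half near 0, right half near 1, plus noise.
img = np.concatenate([np.zeros((8, 4)), np.ones((8, 4))], axis=1)
img += 0.1 * rng.standard_normal(img.shape)
seg = anneal_labels(img, means=[0.0, 1.0])
```

In the multiresolution version, the converged coarse-level labels initialize the next finer level, which is what speeds convergence relative to annealing at full resolution from a random start.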

Journal Article•DOI•
TL;DR: A new alignment method based on disparity analysis is presented, which can overcome many of the difficulties encountered by previous methods in 3D reconstruction of coronal sections.
Abstract: Quantitative autoradiography is a powerful radioisotopic-imaging method for neuroscientists to study local cerebral blood flow and glucose-metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3D) image. 3D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented, which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3D reconstruction are presented.
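The registration subproblem, i.e., finding how one coronal section is displaced relative to its neighbor, can be illustrated with phase correlation, a simple translation-only stand-in for the paper's disparity analysis (which additionally handles rotation, tilt, and local damage). The example shapes below are synthetic.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) translation that maps section b
    onto section a, via the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks past the midpoint back to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

sec = np.zeros((32, 32)); sec[10:20, 12:22] = 1.0    # synthetic section
moved = np.roll(sec, (3, -2), axis=(0, 1))           # misaligned neighbor
dy, dx = phase_correlate(moved, sec)                 # recovers the shift
```

Stacking sections after undoing the estimated shifts yields the aligned 3D volume; disparity analysis generalizes this by estimating a field of local displacements rather than a single global one.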

Journal Article•DOI•
TL;DR: The results show that these methods can take full advantage of the contribution from the fine temporal sampling data of modern tomographs, and thus provide statistically reliable estimates that are comparable to those obtained from nonlinear LS regression.
Abstract: With the advent of positron emission tomography (PET), a variety of techniques have been developed to measure local cerebral blood flow (LCBF) noninvasively in humans. A promising class of techniques, which includes linear least squares (LS), linear weighted least squares (WLS), linear generalized least squares (GLS), and linear generalized weighted least squares (GWLS), is proposed. The statistical characteristics of these methods are examined by computer simulation. The authors present a comparison of these four methods with two other rapid estimation techniques developed by Huang et al. (1982) and Alpert (1984), and with two classical methods, unweighted and weighted nonlinear least squares regression. The results show that these methods can take full advantage of the finely sampled temporal data of modern tomographs, and thus provide statistically reliable estimates comparable to those obtained from nonlinear LS regression. These methods also have high computational efficiency, and the parameters can be estimated directly from operational equations in a single step. Therefore, they can potentially be used for image-wide estimation of local cerebral blood flow and distribution volume with PET.
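The linear LS idea can be shown on the one-compartment Kety model, a standard formulation for flow tracers (the specific simulation values below are assumptions). Integrating dC/dt = K1·Ca − k2·C gives C(t) = K1·∫Ca − k2·∫C, which is linear in (K1, k2) and therefore solvable in one step with ordinary least squares, with no iterative nonlinear regression.

```python
import numpy as np

def linear_ls_flow(ct, ca, times):
    """One-step linear least-squares estimate of (K1, k2) from the
    integrated one-compartment model C = K1*int(Ca) - k2*int(C)."""
    def cumtrapz(y):
        return np.concatenate(([0.0], np.cumsum(
            0.5 * (y[1:] + y[:-1]) * np.diff(times))))
    A = np.column_stack([cumtrapz(ca), -cumtrapz(ct)])
    k1, k2 = np.linalg.lstsq(A, ct, rcond=None)[0]
    return k1, k2

# Synthetic data: simulate the model on a fine grid (forward Euler).
t = np.linspace(0, 120, 1201)
ca = t * np.exp(-t / 15.0)            # assumed arterial input function
K1, k2 = 0.5, 0.1
ct = np.zeros_like(t)
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    ct[i] = ct[i - 1] + dt * (K1 * ca[i - 1] - k2 * ct[i - 1])
k1_est, k2_est = linear_ls_flow(ct, ca, t)   # close to (0.5, 0.1)
```

Because the estimate is a single matrix solve per pixel, this is the property that makes image-wide (parametric-image) estimation practical; the WLS/GLS/GWLS variants differ only in how the residuals are weighted.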

Journal Article•DOI•
TL;DR: A new three-dimensional image reconstruction method is proposed which utilizes a three-dimensional convolution process with an inverse filter function derived analytically from the point spread function of the projection and backprojection geometry.
Abstract: Conventional X-ray tomosynthesis with film can provide a sagittal slice image with a single scan. This technique has the advantage of enabling reconstruction of a sagittal slice, which is difficult to obtain with an X-ray CT system. However, only an image on the focal plane is obtained by a single scan, and the image is degraded by superimposition of structures outside the focal plane. A new three-dimensional image reconstruction method is proposed. This method utilizes a three-dimensional convolution process with an inverse filter function which is derived analytically from the point spread function of the projection and backprojection geometry. A digital tomosynthesis system has also been constructed to evaluate the proposed method. The system was used in phantom experiments and clinical evaluations, and it was confirmed that the proposed method reconstructs a better three-dimensional image with fewer artifacts from outside the focal slice.
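The core operation, i.e., deconvolving the backprojected volume with an inverse filter built from the known point spread function, can be sketched in one dimension. This is a generic regularized inverse filter under an assumed blur kernel, not the paper's analytically derived 3-D filter, but the frequency-domain division is the same idea.

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Regularized frequency-domain inverse filtering (shown in 1-D for
    brevity): divide by the PSF spectrum, with a small floor eps to keep
    the filter stable where the spectrum is near zero."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.fft.fft(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)   # Wiener-like inverse
    return np.fft.ifft(F).real

# Two point sources, blurred by an assumed simple kernel, then restored.
signal = np.zeros(64); signal[30] = 1.0; signal[40] = 0.5
psf = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, 64)))
restored = inverse_filter(blurred, psf)
```

In the tomosynthesis setting the "blur" is the out-of-focus-plane point spread function of the projection/backprojection geometry, so the same division, applied in 3-D, suppresses the superimposed structures from neighboring planes.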

Journal Article•DOI•
TL;DR: A reconstruction algorithm based on the use of optimal experiments is derived and it is shown to be many times faster than standard Newton-based reconstruction algorithms, and results from synthetic data indicate that the images that it produces are comparable.
Abstract: Electrical impedance tomography (EIT) is a noninvasive imaging technique which aims to image the impedance within a body from electrical measurements made on the surface. The reconstruction of impedance images is an ill-posed problem which is both extremely sensitive to noise and highly computationally intensive. The authors define an experimental measurement in EIT and calculate optimal experiments which maximize the distinguishability between the region to be imaged and a best-estimate conductivity distribution. These optimal experiments can be derived from measurements made on the boundary. The analysis clarifies the properties of different voltage measurement schemes. A reconstruction algorithm based on the use of optimal experiments is derived. It is shown to be many times faster than standard Newton-based reconstruction algorithms, and results from synthetic data indicate that the images that it produces are comparable.
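The optimal-experiment idea can be stated in a few lines of linear algebra: the current pattern that best distinguishes the body from a best-estimate conductivity is the eigenvector of the difference of the two boundary transfer maps with the largest-magnitude eigenvalue. The matrices below are small hypothetical stand-ins for the measured and modeled transfer maps, not data from the paper.

```python
import numpy as np

def optimal_current(R_body, R_estimate):
    """Return the current pattern maximizing ||(R_body - R_estimate) I||
    over ||I|| = 1, i.e. the dominant eigenvector of the (symmetric)
    difference matrix, together with its distinguishability value."""
    D = R_body - R_estimate
    vals, vecs = np.linalg.eigh(D)          # symmetric transfer maps
    best = np.argmax(np.abs(vals))
    return vecs[:, best], abs(vals[best])

# Synthetic 8-electrode example: the body differs from the estimate
# in a single entry of the transfer map.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
R_est = A @ A.T + 8 * np.eye(8)             # hypothetical SPD estimate
perturb = np.zeros((8, 8)); perturb[0, 0] = 2.0
R_true = R_est + perturb
pattern, dist = optimal_current(R_true, R_est)
```

Driving the electrodes with `pattern` concentrates measurement sensitivity exactly where the current best estimate is wrong, which is why a reconstruction loop built on such experiments can converge in far fewer, cheaper steps than a full Newton iteration.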