
Showing papers in "IEEE Transactions on Medical Imaging in 2002"


Journal ArticleDOI
TL;DR: A novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities, in which a neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings.
Abstract: We present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.
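
As a rough illustration of the spatially regularized fuzzy c-means idea described in this abstract, the following minimal sketch adds a neighborhood term to the standard FCM updates. It is not the authors' implementation: parameter names and values are placeholders, and the bias-field (inhomogeneity) estimation is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_spatial(image, n_classes=3, m=2.0, alpha=0.5, n_iter=30):
    """Fuzzy c-means with a simple neighborhood regularization term.

    Illustrative sketch only: the neighborhood average of the per-class
    distances is added to each pixel's own distance, biasing memberships
    toward piecewise-homogeneous labelings.  The bias-field estimation of
    the actual algorithm is omitted."""
    x = image.astype(float).ravel()
    shape = image.shape
    v = np.quantile(x, np.linspace(0.1, 0.9, n_classes))        # initial class centers
    for _ in range(n_iter):
        d2 = (x[:, None] - v[None, :]) ** 2 + 1e-12              # squared distances
        nb = np.stack([uniform_filter(d2[:, k].reshape(shape), size=3).ravel()
                       for k in range(n_classes)], axis=1)       # neighborhood term
        cost = d2 + alpha * nb
        u = cost ** (-1.0 / (m - 1))                             # membership update
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)         # center update
    return u.argmax(axis=1).reshape(shape), v
```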

1,786 citations


Journal ArticleDOI
TL;DR: A new approach is presented for elastic registration of medical images, and is applied to magnetic resonance images of the brain, where it results in accurate superposition of image data from individuals with significant anatomical differences.
Abstract: A new approach is presented for elastic registration of medical images, and is applied to magnetic resonance images of the brain. Experimental results demonstrate very high accuracy in superposition of images from different subjects. There are two major novelties in the proposed algorithm. First, it uses an attribute vector, i.e., a set of geometric moment invariants (GMIs) that are defined on each voxel in an image and are calculated from the tissue maps, to reflect the underlying anatomy at different scales. The attribute vector, if rich enough, can distinguish between different parts of an image, which helps establish anatomical correspondences in the deformation procedure; it also helps reduce local minima, by reducing ambiguity in potential matches. This is a fundamental deviation of our method, referred to as the hierarchical attribute matching mechanism for elastic registration (HAMMER), from other volumetric deformation methods, which are typically based on maximizing image similarity. Second, in order to avoid being trapped by local minima, i.e., suboptimal poor matches, HAMMER uses a successive approximation of the energy function being optimized by lower dimensional smooth energy functions, which are constructed to have significantly fewer local minima. This is achieved by hierarchically selecting the driving features that have distinct attribute vectors, thus, drastically reducing ambiguity in finding correspondence. A number of experiments demonstrate that the proposed algorithm results in accurate superposition of image data from individuals with significant anatomical differences.

1,134 citations


Journal ArticleDOI
TL;DR: In the framework of computer-assisted diagnosis of diabetic retinopathy, a new algorithm for the detection of exudates is presented and discussed; it has been tested on a small image database and compared with the performance of a human grader.
Abstract: In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.
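
The sketch below illustrates the two-step idea the abstract describes (candidates from high local grey-level variation, contours via morphological reconstruction). It is an assumption-laden toy, not the paper's algorithm: thresholds are placeholders, the green channel is assumed non-negative, and the optic-disc detection and removal step is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.morphology import reconstruction, binary_dilation, disk

def exudate_candidates(green, var_thresh=50.0, contour_thresh=10.0):
    """Find bright, high-variation candidate regions, then recover their extent
    by morphological reconstruction (illustrative thresholds only)."""
    g = green.astype(float)
    local_mean = uniform_filter(g, size=7)
    local_var = uniform_filter(g * g, size=7) - local_mean ** 2   # local variance
    candidates = local_var > var_thresh
    # remove the candidates from a marker image and reconstruct by dilation;
    # bright structures that were removed cannot be rebuilt from their surroundings
    marker = g.copy()
    marker[binary_dilation(candidates, disk(3))] = 0
    rebuilt = reconstruction(marker, g, method='dilation')
    return (g - rebuilt) > contour_thresh
```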

820 citations


Journal ArticleDOI
TL;DR: A fully automatic "pipeline" image analysis framework that enhances the ability to detect small treatment effects not readily detectable through conventional analysis techniques and holds widespread potential for applications in other neurological disorders, as well as for the study of neurobiology in general.
Abstract: The quantitative analysis of magnetic resonance imaging (MRI) data has become increasingly important in both research and clinical studies aiming at human brain development, function, and pathology. Inevitably, the role of quantitative image analysis in the evaluation of drug therapy will increase, driven in part by requirements imposed by regulatory agencies. However, the prohibitive length of time involved and the significant intra- and inter-rater variability of the measurements obtained from manual analysis of large MRI databases represent major obstacles to the wider application of quantitative MRI analysis. We have developed a fully automatic "pipeline" image analysis framework and have successfully applied it to a number of large-scale, multi-center studies (more than 1000 MRI scans). This pipeline system is based on robust image processing algorithms, executed in a parallel, distributed fashion. This paper describes the application of this system to the automatic quantification of multiple sclerosis lesion load in MRI, in the context of a phase III clinical trial. The pipeline results were evaluated through an extensive validation study, revealing that the obtained lesion measurements are statistically indistinguishable from those obtained by trained human observers. Given that intra- and inter-rater measurement variability is eliminated by automatic analysis, this system enhances the ability to detect small treatment effects not readily detectable through conventional analysis techniques. While useful for clinical trial analysis in multiple sclerosis, this system holds widespread potential for applications in other neurological disorders, as well as for the study of neurobiology in general.

759 citations


Journal ArticleDOI
TL;DR: A penalized-likelihood function is formulated for this polyenergetic model, and an ordered-subsets iterative algorithm is developed for estimating the unknown densities in each voxel; the algorithm monotonically decreases the cost function at each iteration when one subset is used.
Abstract: This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
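
To make the nonlinearity of the polyenergetic model concrete, here is a sketch of the forward (measurement) model the abstract describes, with illustrative variable names; the penalized-likelihood objective and ordered-subsets algorithm themselves are not reproduced.

```python
import numpy as np

def polyenergetic_forward(A, density, material, mass_atten, spectrum):
    """Expected (noise-free) detector counts under a polyenergetic source.

    A          : (n_rays, n_voxels) system matrix of intersection lengths
    density    : (n_voxels,) unknown densities
    material   : (n_voxels,) material index per voxel (e.g. 0 = soft tissue, 1 = bone)
    mass_atten : (n_energies, n_materials) known mass attenuation coefficients
    spectrum   : (n_energies,) source photons per energy bin

    The energy dependence of the attenuation makes the model nonlinear and
    is the source of beam hardening."""
    n_energies, n_materials = mass_atten.shape
    # per-material projections of density along each ray: (n_rays, n_materials)
    dens_proj = np.stack([A @ (density * (material == k))
                          for k in range(n_materials)], axis=1)
    # attenuation line integral at every energy: (n_rays, n_energies)
    line_integral = dens_proj @ mass_atten.T
    # integrate the attenuated spectrum over energy
    return np.exp(-line_integral) @ spectrum
```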

699 citations


Journal ArticleDOI
TL;DR: It is shown that dynamic-scale ridge traversal is insensitive to its initial parameter settings, operates with little additional computational overhead, tracks centerlines with subvoxel accuracy, passes branch points, and handles significant image noise.
Abstract: The extraction of the centerlines of tubular objects in two and three-dimensional images is a part of many clinical image analysis tasks. One common approach to tubular object centerline extraction is based on intensity ridge traversal. In this paper, we evaluate the effects of initialization, noise, and singularities on intensity ridge traversal and present multiscale heuristics and optimal-scale measures that minimize these effects. Monte Carlo experiments using simulated and clinical data are used to quantify how these "dynamic-scale" enhancements address clinical needs regarding speed, accuracy, and automation. In particular, we show that dynamic-scale ridge traversal is insensitive to its initial parameter settings, operates with little additional computational overhead, tracks centerlines with subvoxel accuracy, passes branch points, and handles significant image noise. We also illustrate the capabilities of the method for medical applications involving a variety of tubular structures in clinical data from different organs, patients, and imaging modalities.
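
The following is a toy, single-scale 2-D sketch of intensity ridge traversal to convey the core mechanic (march along the ridge tangent, re-center along the normal using the Hessian). It is only a sketch under simplifying assumptions; the paper's contribution is the multiscale heuristics and dynamic-scale selection layered on top of this idea, which are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def traverse_ridge_2d(image, start, sigma=2.0, step=0.5, n_steps=300):
    """Single-scale ridge traversal from a seed point near a bright tubular object."""
    img = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(img)                 # first derivatives
    gyy, gyx = np.gradient(gy)                # second derivatives (Hessian entries)
    gxy, gxx = np.gradient(gx)

    def sample(f, p):
        return map_coordinates(f, [[p[0]], [p[1]]], order=1)[0]

    p = np.asarray(start, float)
    prev_t, path = None, [np.asarray(start, float)]
    for _ in range(n_steps):
        H = np.array([[sample(gyy, p), sample(gyx, p)],
                      [sample(gxy, p), sample(gxx, p)]])
        grad = np.array([sample(gy, p), sample(gx, p)])
        w, v = np.linalg.eigh(H)
        # tangent: eigenvector with least-negative curvature (along the ridge);
        # normal: eigenvector with strongly negative curvature (across the ridge)
        t, n = v[:, np.argmax(w)], v[:, np.argmin(w)]
        # re-center onto the ridge with one Newton-like step along the normal
        p = p - (grad @ n) / (n @ H @ n) * n
        if prev_t is not None and t @ prev_t < 0:   # keep a consistent direction
            t = -t
        prev_t = t
        p = p + step * t
        path.append(p.copy())
    return np.array(path)
```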

660 citations


Journal ArticleDOI
TL;DR: An active shape model segmentation scheme is presented that is steered by optimal local features rather than the normalized first-order derivative profiles of the original formulation, using a nonlinear kNN classifier to find optimal displacements for landmarks.
Abstract: An active shape model segmentation scheme is presented that is steered by optimal local features, contrary to normalized first order derivative profiles, as in the original formulation [Cootes and Taylor, 1995, 1999, and 2001]. A nonlinear kNN-classifier is used, instead of the linear Mahalanobis distance, to find optimal displacements for landmarks. For each of the landmarks that describe the shape, at each resolution level taken into account during the segmentation optimization procedure, a distinct set of optimal features is determined. The selection of features is automatic, using the training images and sequential feature forward and backward selection. The new approach is tested on synthetic data and in four medical segmentation tasks: segmenting the right and left lung fields in a database of 230 chest radiographs, and segmenting the cerebellum and corpus callosum in a database of 90 slices from MRI brain images. In all cases, the new method produces significantly better results in terms of an overlap error measure (p<0.001 using a paired T-test) than the original active shape model scheme.
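
The snippet below sketches the classifier-driven landmark update in isolation: candidate positions along the profile through a landmark are described by local feature vectors, and a kNN classifier scores them. The features, training data, and labels are synthetic stand-ins (not the paper's optimal features or training procedure); in the paper, a distinct classifier and feature set is trained per landmark and resolution level.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 10))              # stand-in local features
train_labels = (train_features[:, 0] > 0).astype(int)     # 1 = boundary, 0 = background
knn = KNeighborsClassifier(n_neighbors=5).fit(train_features, train_labels)

def best_offset(candidate_features, candidate_offsets):
    """Move the landmark to the profile offset most likely to lie on the boundary."""
    p_boundary = knn.predict_proba(candidate_features)[:, list(knn.classes_).index(1)]
    return candidate_offsets[int(np.argmax(p_boundary))]

offsets = np.arange(-5, 6)                                # candidate displacements (pixels)
print(best_offset(rng.normal(size=(len(offsets), 10)), offsets))
```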

592 citations


Journal ArticleDOI
TL;DR: The ability of SVM to outperform several well-known methods developed for the widely studied problem of MC detection suggests that SVM is a promising technique for object detection in a medical imaging application.
Abstract: We investigate an approach based on support vector machines (SVMs) for detection of microcalcification (MC) clusters in digital mammograms, and propose a successive enhancement learning scheme for improved performance. SVM is a machine-learning method, based on the principle of structural risk minimization, which performs well when applied to data outside the training set. We formulate MC detection as a supervised-learning problem and apply SVM to develop the detection algorithm. We use the SVM to detect at each location in the image whether an MC is present or not. We tested the proposed method using a database of 76 clinical mammograms containing 1120 MCs. We use free-response receiver operating characteristic curves to evaluate detection performance, and compare the proposed algorithm with several existing methods. In our experiments, the proposed SVM framework outperformed all the other methods tested. In particular, a sensitivity as high as 94% was achieved by the SVM method at an error rate of one false-positive cluster per image. The ability of SVM to outperform several well-known methods developed for the widely studied problem of MC detection suggests that SVM is a promising technique for object detection in a medical imaging application.
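
A minimal sketch of the patch-classification idea follows: train an SVM on small labeled patches and evaluate it at each candidate location. The patches and labels here are synthetic stand-ins, the kernel and parameters are arbitrary, and the paper's successive enhancement learning scheme is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
patches = rng.normal(size=(500, 9 * 9))                 # e.g. 9x9 patches, flattened
labels = (patches.mean(axis=1) > 0).astype(int)         # stand-in for MC present / absent
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(patches, labels)

def detect(patch):
    """Decision value for one candidate location (higher = more MC-like)."""
    return float(clf.decision_function(patch.reshape(1, -1))[0])
```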

574 citations


Journal ArticleDOI
TL;DR: Reconstruction-based microwave-induced thermoacoustic tomography in a spherical configuration is presented; an exact reconstruction solution is derived and approximated by a modified backprojection algorithm, and experiments demonstrate that the reconstructed images agree well with the original samples.
Abstract: Reconstruction-based microwave-induced thermoacoustic tomography in a spherical configuration is presented. Thermoacoustic waves from biological tissue samples excited by microwave pulses are measured by a wide-band unfocused ultrasonic transducer, which is set on a spherical surface enclosing the sample. Sufficient data are acquired from different directions to reconstruct the microwave absorption distribution. An exact reconstruction solution is derived and approximated to a modified backprojection algorithm. Experiments demonstrate that the reconstructed images agree well with the original samples. The spatial resolution of the system reaches 0.5 mm.
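
For orientation, here is a plain delay-and-sum backprojection over a circular (2-D) detection geometry. This is only a generic illustration of projecting measured pressure traces back along acoustic times of flight; it is not the paper's exact solution or its modified backprojection algorithm, and all names are illustrative.

```python
import numpy as np

def backproject_circular(signals, det_angles, radius, c, dt, grid_x, grid_y):
    """signals    : (n_detectors, n_samples) recorded pressure traces
       det_angles : (n_detectors,) detector angular positions on the circle
       radius     : detection circle radius; c: sound speed; dt: sampling interval
       grid_x/y   : 2-D arrays of reconstruction grid coordinates"""
    image = np.zeros_like(grid_x, dtype=float)
    det_x, det_y = radius * np.cos(det_angles), radius * np.sin(det_angles)
    n_samples = signals.shape[1]
    for d in range(len(det_angles)):
        dist = np.hypot(grid_x - det_x[d], grid_y - det_y[d])
        idx = np.clip(np.round(dist / (c * dt)).astype(int), 0, n_samples - 1)
        image += signals[d, idx]      # sample at the acoustic time of flight
    return image / len(det_angles)
```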

491 citations


Journal ArticleDOI
TL;DR: Methods for a geometrical and structural analysis of vessel systems have been evaluated in the clinical environment and have been used in more than 170 cases so far to plan interventions and transplantations.
Abstract: For liver surgical planning, the structure and morphology of the hepatic vessels and their relationship to tumors are of major interest. To achieve a fast and robust assistance with optimal quantitative and visual information, we present methods for a geometrical and structural analysis of vessel systems. Starting from the raw image data a sequence of image processing steps has to be carried out until a three-dimensional representation of the relevant anatomic and pathologic structures is generated. Based on computed tomography (CT) scans, the following steps are performed. 1) The volume data is preprocessed and the vessels are segmented. 2) The skeleton of the vessels is determined and transformed into a graph enabling a geometrical and structural shape analysis. Using this information the different intrahepatic vessel systems are identified automatically. 3) Based on the structural analysis of the branches of the portal vein, their vascular territories are approximated with different methods. These methods are compared and validated anatomically by means of corrosion casts of human livers. 4) Vessels are visualized with graphics primitives fitted to the skeleton to provide smooth visualizations without aliasing artifacts. The image analysis techniques have been evaluated in the clinical environment and have been used in more than 170 cases so far to plan interventions and transplantations.

470 citations


Journal ArticleDOI
TL;DR: The comprehensive design of a three-dimensional (3-D) active appearance model (AAM) is reported for the first time as an involved extension of the AAM framework introduced by Cootes et al.
Abstract: A model-based method for three-dimensional image segmentation was developed and its performance assessed in segmentation of volumetric cardiac magnetic resonance (MR) images and echocardiographic temporal image sequences. Comprehensive design of a three-dimensional (3-D) active appearance model (AAM) is reported for the first time as an involved extension of the AAM framework introduced by Cootes et al. The model's behavior is learned from manually traced segmentation examples during an automated training stage. Information about shape and image appearance of the cardiac structures is contained in a single model. This ensures a spatially and/or temporally consistent segmentation of three-dimensional cardiac images. The clinical potential of the 3-D AAM is demonstrated in short-axis cardiac MR images and four-chamber echocardiographic sequences. The method's performance was assessed by comparison with manually identified independent standards in 56 clinical MR and 64 clinical echo image sequences. The AAM method showed good agreement with the independent standard using quantitative indexes of border positioning errors, endo- and epicardial volumes, and left ventricular mass. In MR, the endocardial volumes, epicardial volumes, and left ventricular wall mass correlation coefficients between manual and AAM were R² = 0.94, 0.97, and 0.82, respectively. For echocardiographic analysis, the area correlation was R² = 0.79. The AAM method shows high promise for successful application to MR and echocardiographic image analysis in a clinical setting.

Journal ArticleDOI
TL;DR: The current status of cardiac image registration methods is reviewed; although physicians often mentally integrate image information from different modalities, automatic registration based on computer programs might offer better accuracy and repeatability and save time.
Abstract: In this paper, the current status of cardiac image registration methods is reviewed. The combination of information from multiple cardiac image modalities, such as magnetic resonance imaging, computed tomography, positron emission tomography, single-photon emission computed tomography, and ultrasound, is of increasing interest in the medical community for physiologic understanding and diagnostic purposes. Registration of cardiac images is a more complex problem than brain image registration because the heart is a nonrigid moving organ inside a moving body. Moreover, as compared to the registration of brain images, the heart exhibits much fewer accurate anatomical landmarks. In a clinical context, physicians often mentally integrate image information from different modalities. Automatic registration, based on computer programs, might, however, offer better accuracy and repeatability and save time.

Journal ArticleDOI
TL;DR: A novel method is introduced for the generation of landmarks for three-dimensional (3-D) shapes and the construction of the corresponding 3-D statistical shape models that can treat multiple-part structures and requires less restrictive assumptions on the structure's topology.
Abstract: A novel method is introduced for the generation of landmarks for three-dimensional (3-D) shapes and the construction of the corresponding 3-D statistical shape models. Automatic landmarking of a set of manual segmentations from a class of shapes is achieved by 1) construction of an atlas of the class, 2) automatic extraction of the landmarks from the atlas, and 3) subsequent propagation of these landmarks to each example shape via a volumetric nonrigid registration technique using multiresolution B-spline deformations. This approach presents some advantages over previously published methods: it can treat multiple-part structures and requires less restrictive assumptions on the structure's topology. In this paper, we address the problem of building a 3-D statistical shape model of the left and right ventricle of the heart from 3-D magnetic resonance images. The average accuracy in landmark propagation is shown to be below 2.2 mm. This application demonstrates the robustness and accuracy of the method in the presence of large shape variability and multiple objects.

Journal ArticleDOI
TL;DR: This work reports an exact and fast Fourier-domain reconstruction algorithm for thermoacoustic tomography in a cylindrical configuration, assuming thermal confinement and constant acoustic speed, and demonstrates that the blurring caused by the finite size of the detector surface is the primary limiting factor on the resolution.
Abstract: For pt. I see ibid., vol. 21, no. 7, p. 823-8 (2002). Microwave-induced thermoacoustic tomography (TAT) in a cylindrical configuration is developed to image biological tissue. Thermoacoustic signals are acquired by scanning a flat ultrasonic transducer. Using a new expansion of a spherical wave in cylindrical coordinates, we apply the Fourier and Hankel transforms to TAT and obtain an exact frequency-domain reconstruction method. The effect of discrete spatial sampling on image quality is analyzed. An aliasing-proof reconstruction method is proposed. Numerical and experimental results are included.

Journal ArticleDOI
TL;DR: A novel extension of active appearance models (AAMs) for automated border detection in echocardiographic image sequences is reported and the AAMM was significantly more accurate than an equivalent set of two-dimensional AAMs.
Abstract: A novel extension of active appearance models (AAMs) for automated border detection in echocardiographic image sequences is reported. The active appearance motion model (AAMM) technique allows fully automated robust and time-continuous delineation of left ventricular (LV) endocardial contours over the full heart cycle with good results. Nonlinear intensity normalization was developed and employed to accommodate ultrasound-specific intensity distributions. The method was trained and tested on 16-frame phase-normalized transthoracic four-chamber sequences of 129 unselected infarct patients, split randomly into a training set (n=65) and a test set (n=64). Borders were compared to expert drawn endocardial contours. On the test set, fully automated AAMM performed well in 97% of the cases (average distance between manual and automatic landmark points was 3.3 mm, comparable to human interobserver variabilities). The ultrasound-specific intensity normalization proved to be of great value for good results in echocardiograms. The AAMM was significantly more accurate than an equivalent set of two-dimensional AAMs.

Journal ArticleDOI
TL;DR: The method is based on parametric active contours and includes several adaptations that address important difficulties of cellular imaging, particularly the presence of low-contrast boundary deformations known as pseudopods and the occurrence of multiple contacts between cells.
Abstract: This paper presents a segmentation and tracking method for quantitative analysis of cell dynamics from in vitro videomicroscopy data. The method is based on parametric active contours and includes several adaptations that address important difficulties of cellular imaging, particularly the presence of low-contrast boundary deformations known as pseudopods, and the occurrence of multiple contacts between cells. First, we use an edge map based on the average intensity dispersion that takes advantage of relative background homogeneity to facilitate the detection of both pseudopods and interfaces between adjacent cells. Second, we introduce a repulsive interaction between contours that allows correct segmentation of objects in contact and overcomes the shortcomings of previously reported techniques to enforce contour separation. Our tracking technique was validated on a realistic data set by comparison with a manually defined ground-truth and was successfully applied to study the motility of amoebae in a biological research project.

Journal ArticleDOI
TL;DR: It is shown that the rigid-body motion of the heart is primarily in the craniocaudal direction with smaller displacements in the right-left and anterior-posterior directions; this is in agreement with previous studies.
Abstract: This paper describes a quantitative assessment of respiratory motion of the heart and the construction of a model of respiratory motion. Three-dimensional magnetic resonance scans were acquired on eight normal volunteers and ten patients. The volunteers were imaged at multiple positions in the breathing cycle between full exhalation and full inhalation while holding their breath. The exhalation volume was segmented and used as a template to which the other volumes were registered using an intensity-based rigid registration algorithm followed by nonrigid registration. The patients were imaged at inhale and exhale only. The registration results were validated by visual assessment and consistency measurements indicating subvoxel registration accuracy. For all subjects, we assessed the nonrigid motion of the heart at the right coronary artery, right atrium, and left ventricle. We show that the rigid-body motion of the heart is primarily in the craniocaudal direction with smaller displacements in the right-left and anterior-posterior directions; this is in agreement with previous studies. Deformation was greatest for the free wall of the right atrium and the left ventricle; typical deformations were 3-4 mm with deformations of up to 7 mm observed in some subjects. Using the registration results, landmarks on the template surface were mapped to their correct positions through the breathing cycle. Principal component analysis produced a statistical model of the motion and deformation of the heart. We discuss how this model could be used to assist motion correction.
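
As a compact illustration of the statistical motion model mentioned at the end of the abstract, the sketch below performs principal component analysis on vectors of landmark displacements. Variable names and the data layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def motion_pca(displacements, n_modes=3):
    """displacements: (n_observations, n_values) array, each row the concatenated
    landmark displacements at one breathing position.  Returns the mean motion,
    the principal motion modes, and their variances."""
    X = np.asarray(displacements, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal modes of motion/deformation
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]
    variance = S[:n_modes] ** 2 / (len(X) - 1)
    return mean, modes, variance

def synthesize(mean, modes, coeffs):
    """Generate a plausible motion state from mode coefficients."""
    return mean + np.asarray(coeffs) @ modes
```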

Journal ArticleDOI
TL;DR: This paper presents an efficient and accurate, fully automatic 3-D segmentation procedure for brain MR scans that incorporates a fast and accurate way to find optimal segmentations, given the intensity models along with the spatial coherence assumption.
Abstract: Automatic three-dimensional (3-D) segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Of the techniques reported in the literature, very few are fully automatic. In this paper, we present an efficient and accurate, fully automatic 3-D segmentation procedure for brain MR scans. It has several salient features, namely the following. 1) Instead of a single multiplicative bias field that affects all tissue intensities, separate parametric smooth models are used for the intensity of each class. 2) A brain atlas is used in conjunction with a robust registration procedure to find a nonrigid transformation that maps the standard brain to the specimen to be segmented. This transformation is then used to: segment the brain from nonbrain tissue; compute prior probabilities for each class at each voxel location; and find an appropriate automatic initialization. 3) Finally, a novel algorithm is presented which is a variant of the expectation-maximization procedure, that incorporates a fast and accurate way to find optimal segmentations, given the intensity models along with the spatial coherence assumption. Experimental results with both synthetic and real data are included, as well as comparisons of the performance of our algorithm with that of other published methods.
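
The toy loop below shows the flavor of an EM-style segmentation with per-voxel atlas priors, in the spirit of the abstract. It is deliberately simplified: the per-class smooth intensity (bias) models and the spatial coherence step of the actual algorithm are omitted, and all names are illustrative.

```python
import numpy as np

def em_segment(intensities, priors, n_iter=20):
    """intensities: (n_voxels,) observed intensities
       priors     : (n_voxels, n_classes) atlas prior probability of each class"""
    x = intensities.astype(float)
    n_classes = priors.shape[1]
    mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
    var = np.full(n_classes, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities = atlas prior * Gaussian likelihood, normalized
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = priors * lik
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: update class means and variances
        w = resp.sum(axis=0) + 1e-12
        mu = (resp * x[:, None]).sum(axis=0) / w
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
    return resp.argmax(axis=1)
```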

Journal ArticleDOI
TL;DR: A fully automatic method is presented to detect abnormalities in frontal chest radiographs and aggregate them into an overall abnormality score; it is aimed at finding abnormal signs of a diffuse textural nature, such as those encountered in mass chest screening against tuberculosis (TB).
Abstract: A fully automatic method is presented to detect abnormalities in frontal chest radiographs; these are aggregated into an overall abnormality score. The method is aimed at finding abnormal signs of a diffuse textural nature, such as those encountered in mass chest screening against tuberculosis (TB). The scheme starts with automatic segmentation of the lung fields, using active shape models. The segmentation is used to subdivide the lung fields into overlapping regions of various sizes. Texture features are extracted from each region, using the moments of responses to a multiscale filter bank. Additional "difference features" are obtained by subtracting feature vectors from corresponding regions in the left and right lung fields. A separate training set is constructed for each region. All regions are classified by voting among the k nearest neighbors, with leave-one-out. Next, the classification results of each region are combined, using a weighted multiplier in which regions with higher classification reliability weigh more heavily. This produces an abnormality score for each image. The method is evaluated on two databases. The first database was collected from a TB mass chest screening program, from which 147 images with textural abnormalities and 241 normal images were selected. Although this database contains many subtle abnormalities, the classification has a sensitivity of 0.86 at a specificity of 0.50 and an area under the receiver operating characteristic (ROC) curve of 0.820. The second database consists of 100 normal images and 100 abnormal images with interstitial disease. For this database, the results were a sensitivity of 0.97 at a specificity of 0.90 and an area under the ROC curve of 0.986.
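
As a sketch of "moments of responses to a multiscale filter bank", the snippet below uses Gaussian derivatives up to second order at a few scales and summarizes each response with four moments. The specific filter bank, scales, and moments are assumptions for illustration; the paper's exact choices may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_features(region, sigmas=(1, 2, 4)):
    """Per-region texture features: moments of multiscale Gaussian-derivative responses."""
    r = region.astype(float)
    feats = []
    for sigma in sigmas:
        for order in [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]:
            resp = gaussian_filter(r, sigma=sigma, order=order)
            m, sd = resp.mean(), resp.std() + 1e-12
            z = (resp - m) / sd
            feats += [m, sd, (z ** 3).mean(), (z ** 4).mean()]  # mean, std, skew, kurtosis
    return np.array(feats)   # 3 scales x 6 filters x 4 moments = 72 features
```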

Journal ArticleDOI
TL;DR: This paper introduces a significant enhancement over existing gradient-based snakes in the form of a modified gradient vector flow that can track leukocytes rolling at high speeds that are not amenable to tracking with the existing edge-based techniques.
Abstract: Inflammatory disease is initiated by leukocytes (white blood cells) rolling along the inner surface lining of small blood vessels called postcapillary venules. Studying the number and velocity of rolling leukocytes is essential to understanding and successfully treating inflammatory diseases. Potential inhibitors of leukocyte recruitment can be screened by leukocyte rolling assays and successful inhibitors validated by intravital microscopy. In this paper, we present an active contour or snake-based technique to automatically track the movement of the leukocytes. The novelty of the proposed method lies in the energy functional that constrains the shape and size of the active contour. This paper introduces a significant enhancement over existing gradient-based snakes in the form of a modified gradient vector flow. Using the gradient vector flow, we can track leukocytes rolling at high speeds that are not amenable to tracking with the existing edge-based techniques. We also propose a new energy-based implicit sampling method of the points on the active contour that replaces the computationally expensive explicit method. To enhance the performance of this shape and size constrained snake model, we have coupled it with a Kalman filter so that during coasting (when the leukocytes are completely occluded or obscured), the tracker may infer the location of the center of the leukocyte. Finally, we have compared the performance of the proposed snake tracker with that of the correlation and centroid-based trackers. The proposed snake tracker results in superior performance measures, such as reduced error in locating the leukocyte under tracking and improvements in the percentage of frames successfully tracked. For screening and drug validation, the tracker shows promise as an automated data collection tool.

Journal ArticleDOI
TL;DR: A generic methodology for estimating soft tissue deformation is presented that integrates image-derived information with biomechanical models and is applied to the problem of cardiac deformation estimation, providing quantitative regional 3-D estimates of heart deformation.
Abstract: The quantitative estimation of regional cardiac deformation from three-dimensional (3-D) image sequences has important clinical implications for the assessment of viability in the heart wall. We present here a generic methodology for estimating soft tissue deformation which integrates image-derived information with biomechanical models, and apply it to the problem of cardiac deformation estimation. The method is image modality independent. The images are segmented interactively and then initial correspondence is established using a shape-tracking approach. A dense motion field is then estimated using a transversely isotropic, linear-elastic model, which accounts for the muscle fiber directions in the left ventricle. The dense motion field is in turn used to calculate the deformation of the heart wall in terms of strain in cardiac specific directions. The strains obtained using this approach in open-chest dogs before and after coronary occlusion, exhibit a high correlation with strains produced in the same animals using implanted markers. Further, they show good agreement with previously published results in the literature. This proposed method provides quantitative regional 3-D estimates of heart deformation.

Journal ArticleDOI
TL;DR: A method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data using an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis is described.
Abstract: We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using ¹¹C-raclopride.
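
The core likelihood idea can be written in a few lines: for an inhomogeneous Poisson process, the log-likelihood of the event times is the sum of log-rates at the events minus the integral of the rate over the scan. The sketch below evaluates this for a single voxel with a cubic B-spline rate; the smoothness and nonnegativity penalties and the randoms/scatter terms of the paper are omitted, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def inhomogeneous_poisson_loglik(arrival_times, knots, coeffs, degree=3):
    """log L = sum_i log(lambda(t_i)) - integral of lambda(t) dt.
    Requires len(knots) == len(coeffs) + degree + 1 (standard B-spline setup)."""
    rate = BSpline(knots, coeffs, degree)
    lam = np.clip(rate(arrival_times), 1e-12, None)            # rate at each event time
    t = np.linspace(knots[degree], knots[-degree - 1], 2000)    # spline support
    integral = np.clip(rate(t), 0.0, None).mean() * (t[-1] - t[0])  # numeric integral
    return np.sum(np.log(lam)) - integral
```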

Journal ArticleDOI
TL;DR: A computationally efficient fully 3-D MCS-based reconstruction architecture is developed by combining several methods, including a dual matrix ordered subset (DM-OS) reconstruction algorithm to accelerate the reconstruction and avoid massive transition matrix precalculation and storage.
Abstract: Quantitative accuracy of single photon emission computed tomography (SPECT) images is highly dependent on the photon scatter model used for image reconstruction. Monte Carlo simulation (MCS) is the most general method for detailed modeling of scatter, but to date, fully three-dimensional (3-D) MCS-based statistical SPECT reconstruction approaches have not been realized, due to prohibitively long computation times and excessive computer memory requirements. MCS-based reconstruction has previously been restricted to two-dimensional approaches that are vastly inferior to fully 3-D reconstruction. Instead of MCS, scatter calculations based on simplified but less accurate models are sometimes incorporated in fully 3-D SPECT reconstruction algorithms. We developed a computationally efficient fully 3-D MCS-based reconstruction architecture by combining the following methods: 1) a dual matrix ordered subset (DM-OS) reconstruction algorithm to accelerate the reconstruction and avoid massive transition matrix precalculation and storage; 2) a stochastic photon transport calculation in MCS is combined with an analytic detector modeling step to reduce noise in the Monte Carlo (MC)-based reprojection after only a small number of photon histories have been tracked; and 3) the number of photon histories simulated is reduced by an order of magnitude in early iterations, or photon histories calculated in an early iteration are reused. For a 64 × 64 × 64 image array, the reconstruction time required for ten DM-OS iterations is approximately 30 min on a dual processor (AMD 1.4 GHz) PC, in which case the stochastic nature of MCS modeling is found to have a negligible effect on noise in reconstructions. Since MCS can calculate photon transport for any clinically used photon energy and patient attenuation distribution, the proposed methodology is expected to be useful for obtaining highly accurate quantitative SPECT images within clinically acceptable computation times.

Journal ArticleDOI
TL;DR: It is found that enhancement based on the FWT suffers from one serious drawback: the introduction of visible artifacts when large structures are enhanced strongly. By contrast, the Laplacian Pyramid allows a smooth enhancement of large structures, such that visible artifacts can be avoided.
Abstract: Contrast enhancement of radiographs based on a multiscale decomposition of the images has recently proven to be a far more versatile and efficient method than regular unsharp-masking techniques, while containing these as a subset. In this paper, we compare the performance of two multiscale methods, namely the Laplacian Pyramid and the fast wavelet transform (FWT). We find that enhancement based on the FWT suffers from one serious drawback: the introduction of visible artifacts when large structures are enhanced strongly. By contrast, the Laplacian Pyramid allows a smooth enhancement of large structures, such that visible artifacts can be avoided. Only for the enhancement of very small details, for denoising applications, or for compression of images may the FWT have some advantages over the Laplacian Pyramid.
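
A rough sketch of pyramid-based enhancement for a 2-D image follows: decompose into band-pass (detail) levels plus a coarse residual, amplify each band, and re-synthesize. The gains are arbitrary placeholders and the construction is a simplified Laplacian pyramid, not the paper's exact decomposition or its nonlinear gain curves.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid_enhance(image, levels=4, gains=(3.0, 2.0, 1.5, 1.2)):
    """Multiply each detail band by a gain (finest band gets gains[0]) and rebuild."""
    img = image.astype(float)
    bands, current = [], img
    for _ in range(levels):
        low = gaussian_filter(current, sigma=2.0)
        small = low[::2, ::2]                                          # next, coarser level
        up = zoom(small, [s / o for s, o in zip(low.shape, small.shape)], order=1)
        bands.append(current - up)                                     # Laplacian (detail) band
        current = small
    out = current                                                      # coarse residual
    for band, gain in zip(reversed(bands), reversed(gains[:levels])):
        out = zoom(out, [s / o for s, o in zip(band.shape, out.shape)], order=1)
        out = out + gain * band                                        # add amplified detail back
    return out
```

With all gains set to 1 this re-synthesis reproduces the input exactly, which makes the effect of the per-band gains easy to inspect.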

Journal ArticleDOI
TL;DR: A method is presented for simultaneous estimation of video-intensity inhomogeneities and segmentation of US image tissue regions, and it is explained how the underlying multiplicative model can be related to the ultrasonic physics of image formation to justify the approach.
Abstract: Displayed ultrasound (US) B-mode images often exhibit tissue intensity inhomogeneities dominated by nonuniform beam attenuation within the body. This is a major problem for intensity-based, automatic segmentation of video-intensity images because conventional threshold-based or intensity-statistic-based approaches do not work well in the presence of such image distortions. Time gain compensation (TGC) is typically used in standard US machines in an attempt to overcome this. However this compensation method is position-dependent which means that different tissues in the same TGC time-range (or corresponding depth range) will be, incorrectly, compensated by the same amount. Compensation should really be tissue-type dependent but automating this step is difficult. The main contribution of this paper is to develop a method for simultaneous estimation of video-intensity inhomogeneities and segmentation of US image tissue regions. The method uses a combination of the maximum a posteriori (MAP) and Markov random field (MRF) methods to estimate the US image distortion field assuming it follows a multiplicative model while at the same time labeling image regions based on the corrected intensity statistics. The MAP step is used to estimate the intensity model parameters while the MRF step provides a novel way of incorporating the distributions of image tissue classes as a spatial smoothness constraint. We explain how this multiplicative model can be related to the ultrasonic physics of image formation to justify our approach. Experiments are presented on synthetic images and a gelatin phantom to evaluate quantitatively the accuracy of the method. We also discuss qualitatively the application of the method to clinical breast and cardiac US images. Limitations of the method and potential clinical applications are outlined in the conclusion.

Journal ArticleDOI
TL;DR: The registration of ultrasound volumes based on the mutual information measure, a technique originally applied to multimodality registration of brain images, is investigated; the method should work well for a variety of applications examining serial anatomic and physiologic changes.
Abstract: We investigated the registration of ultrasound volumes based on the mutual information measure, a technique originally applied to multimodality registration of brain images. A prerequisite for successful registration is a smooth, quasi-convex mutual information surface with an unambiguous maximum. We discuss the necessary preprocessing to create such a surface for ultrasound volumes. Abdominal and thoracic organs imaged with ultrasound typically move relative to the exterior of the body and are deformable. Consequently, four specific instances of image registration involving progressively generalized transformations were studied: rigid-body, rigid-body + uniform scaling, rigid-body + nonuniform scaling, and affine. Registration was applied to clinically acquired volumetric images. The accuracy was comparable with the voxel dimension for all transformation modes, although it degraded as the transformation grew more complex. Likewise, the capture range became narrower with the complexity of transformation. As the use of real-time three-dimensional ultrasound becomes more prevalent, the method we present should work well for a variety of applications examining serial anatomic and physiologic changes. Developers of these clinical applications would match the deformation model of their problem to one of the four transformation models presented here.
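
For reference, the mutual information similarity measure referred to above can be computed from the joint intensity histogram of two images, as in the sketch below. This is only the generic measure; the paper's preprocessing, transformation models, and optimizer are not shown.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two equally sized images (one typically resampled
    under a candidate transformation before calling this)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```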

Journal ArticleDOI
TL;DR: The basic design of the modified HMD and the method and results of an extensive laboratory study for photogrammetric calibration of the Varioscope's computer displays to a real-world scene are presented; the accuracy achieved is sufficient for a wide range of CAS applications.
Abstract: Computer-aided surgery (CAS), the intraoperative application of biomedical visualization techniques, appears to be one of the most promising fields of application for augmented reality (AR), the display of additional computer-generated graphics over a real-world scene. Typically a device such as a head-mounted display (HMD) is used for AR. However, considerable technical problems connected with AR have limited the intraoperative application of HMDs up to now. One of the difficulties in using HMDs is the requirement for a common optical focal plane for both the real-world scene and the computer-generated image, and acceptance of the HMD by the user in a surgical environment. In order to increase the clinical acceptance of AR, we have adapted the Varioscope (Life Optics, Vienna), a miniature, cost-effective head-mounted operating binocular, for AR. In this paper, we present the basic design of the modified HMD, and the method and results of an extensive laboratory study for photogrammetric calibration of the Varioscope's computer displays to a real-world scene. In a series of 16 calibrations with varying zoom factors and object distances, mean calibration error was found to be 1.24 ± 0.38 pixels or 0.12 ± 0.05 mm for a 640 × 480 display. Maximum error accounted for 3.33 ± 1.04 pixels or 0.33 ± 0.12 mm. The location of a position measurement probe of an optical tracking system was transformed to the display with an error of less than 1 mm in the real world in 56% of all cases. For the remaining cases, error was below 2 mm. We conclude that the accuracy achieved in our experiments is sufficient for a wide range of CAS applications.

Journal ArticleDOI
TL;DR: Overall, multifrequency electrical impedance imaging appears promising for detecting breast malignancies, but improvements must be made before the method reaches its full potential.
Abstract: Electrical impedance spectroscopy (EIS) is a potential, noninvasive technique to image women for breast cancer. Studies have shown characteristic frequency dispersions in the electrical conductivity and permittivity of malignant versus normal tissue. Using a multifrequency EIS system, we imaged the breasts of 26 women. All patients had mammograms ranked using the American College of Radiology (ACR) BIRADS system. Of the 51 individual breasts imaged, 38 were ACR 1 negative, six had ACR 4-5 suspicious lesions, and seven had ACR 2 benign findings such as fibroadenomas or calcifications. A radially translatable circular array of 16 Ag/AgCl electrodes was placed around the breast while the patient lay prone. We applied trigonometric voltage patterns at ten frequencies between 10 and 950 kHz. Anatomically coronal images were reconstructed from this data using nonlinear partial differential equation methods. Typically, ACR 1-rated breasts were interrogated in a single central plane whereas ACR 2-5-rated breasts were imaged in multiple planes covering the region of suspicion. In general, a characteristic homogeneous image emerged for mammographically normal cases while focal inhomogeneities were observed in images from women with malignancies. Using a specific visual criterion, EIS images identified 83% of the ACR 4-5 lesions while 67% were detected using a numerical criterion. Overall, multifrequency electrical impedance imaging appears promising for detecting breast malignancies, but improvements must be made before the method reaches its full potential.

Journal ArticleDOI
TL;DR: This paper presents a new method for image-guided surgery called image-enhanced endoscopy, which provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery.
Abstract: This paper presents a new method for image-guided surgery called image-enhanced endoscopy. Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope and its position and orientation is tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (that represent the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (that represent the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices.

Journal ArticleDOI
TL;DR: This paper provides an efficient automatic means to extract the centerline and its associated branches (caused by a forceful touching of colon and small bowel or a deep fold in twisted colon lumen) and discusses its applications on fly-through path planning and endoscopic simulation.
Abstract: In this paper, we introduce a concise and concrete definition of an accurate colon centerline and provide an efficient automatic means to extract the centerline and its associated branches (caused by a forceful touching of colon and small bowel or a deep fold in twisted colon lumen). We further discuss its applications on fly-through path planning and endoscopic simulation, as well as its potential to solve the challenging touching and colon collapse problems in virtual colonoscopy. Experimental results demonstrated its centeredness, robustness, and efficiency.