
Showing papers in "IEEE Transactions on Medical Imaging in 2006"


Journal ArticleDOI
TL;DR: In this paper, a method for automated segmentation of the vasculature in retinal images is presented, which produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector.
Abstract: We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that presented by state-of-the-art approaches. We are making our implementation available as open-source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
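As a rough illustration of the pixel-classification pipeline described in this abstract, the sketch below computes multiscale Gabor-magnitude features and applies a Bayes rule with one Gaussian mixture per class. It relies on scikit-image and scikit-learn; the filter frequencies, orientation sampling, and number of mixture components are placeholders, not the authors' settings.

```python
# Sketch of pixel-wise vessel classification with Gabor responses and a
# Gaussian-mixture Bayes rule (illustrative parameters, not the paper's).
import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

def pixel_features(green_channel, frequencies=(0.1, 0.2, 0.3), n_angles=6):
    """Stack pixel intensity with the maximum Gabor magnitude over orientation, per scale."""
    feats = [green_channel]
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    for f in frequencies:
        resp = [np.hypot(*gabor(green_channel, frequency=f, theta=t)) for t in thetas]
        feats.append(np.max(resp, axis=0))           # orientation-invariant response
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_bayes_gmm(X, y, n_components=10):
    """Fit one Gaussian mixture per class (0 = non-vessel, 1 = vessel) as class likelihoods."""
    models, priors = {}, {}
    for c in (0, 1):
        models[c] = GaussianMixture(n_components=n_components).fit(X[y == c])
        priors[c] = np.mean(y == c)
    return models, priors

def classify(X, models, priors):
    """Bayes rule: pick the class with the larger posterior (computed in the log domain)."""
    log_post = np.stack([models[c].score_samples(X) + np.log(priors[c]) for c in (0, 1)], axis=1)
    return np.argmax(log_post, axis=1)
```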

1,435 citations


Journal ArticleDOI
TL;DR: This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images, and presents a classification of methodology in terms of use of prior information.
Abstract: This paper reviews ultrasound segmentation methods, in a broad sense, focusing on techniques developed for medical B-mode ultrasound images. First, we present a review of articles by clinical application to highlight the approaches that have been investigated and the degree of validation performed in different clinical domains. Then, we present a classification of methodology in terms of the use of prior information. We conclude by selecting ten papers that present original ideas which have demonstrated particular clinical usefulness or potential specific to the ultrasound segmentation problem.

1,150 citations


Journal ArticleDOI
TL;DR: An automated method for the segmentation of the vascular network in retinal images that outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity is presented.
Abstract: This paper presents an automated method for the segmentation of the vascular network in retinal images. The algorithm starts with the extraction of vessel centerlines, which are used as guidelines for the subsequent vessel filling phase. For this purpose, the outputs of four directional differential operators are processed in order to select connected sets of candidate points to be further classified as centerline pixels using vessel-derived features. The final segmentation is obtained using an iterative region growing method that integrates the contents of several binary images resulting from vessel-width-dependent morphological filters. Our approach was tested on two publicly available databases and its results are compared with recently published methods. The results demonstrate that our algorithm outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity.

900 citations


Journal ArticleDOI
TL;DR: Common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and to account for fractional labels using fuzzy set theory, allowing a single "figure-of-merit" to be reported that summarises the results of a complex experiment by image pair, by label, or overall.
Abstract: Measures of overlap of labelled regions of images, such as the Dice and Tanimoto coefficients, have been extensively used to evaluate image registration and segmentation algorithms. Modern studies can include multiple labels defined on multiple images yet most evaluation schemes report one overlap per labelled region, simply averaged over multiple images. In this paper, common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and account for fractional labels using fuzzy set theory. This framework allows a single "figure-of-merit" to be reported which summarises the results of a complex experiment by image pair, by label or overall. A complementary measure of error, the overlap distance, is defined which captures the spatial extent of the nonoverlapping part and is related to the Hausdorff distance computed on grey level images. The generalized overlap measures are validated on synthetic images for which the overlap can be computed analytically and used as similarity measures in nonrigid registration of three-dimensional magnetic resonance imaging (MRI) brain images. Finally, a pragmatic segmentation ground truth is constructed by registering a magnetic resonance atlas brain to 20 individual scans, and used with the overlap measures to evaluate publicly available brain segmentation algorithms
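A minimal sketch of the pooled, fuzzy generalization of the Dice and Tanimoto coefficients described in this abstract: intersections and unions are computed voxel-wise with min/max on fuzzy memberships and summed over all labels and image pairs before forming a single ratio. The pooling and naming below are illustrative, not the paper's exact definitions.

```python
import numpy as np

def generalized_overlap(pairs):
    """Pooled Tanimoto- and Dice-style overlap over an ensemble of label maps.

    `pairs` is a list of (a, b) arrays holding fuzzy label memberships in [0, 1]
    (one array per label per image pair). Fuzzy intersection = voxel-wise min,
    fuzzy union = voxel-wise max, pooled over the whole ensemble.
    """
    inter = sum(np.minimum(a, b).sum() for a, b in pairs)
    union = sum(np.maximum(a, b).sum() for a, b in pairs)
    tanimoto = inter / union
    dice = 2 * inter / sum(a.sum() + b.sum() for a, b in pairs)
    return tanimoto, dice
```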

680 citations


Journal ArticleDOI
TL;DR: A review of the literature on computer analysis of the lungs in CT scans is presented and addresses segmentation of various pulmonary structures, registration of chest scans, and applications aimed at detection, classification and quantification of chest abnormalities.
Abstract: Current computed tomography (CT) technology allows for near isotropic, submillimeter resolution acquisition of the complete chest in a single breath hold. These thin-slice chest scans have become indispensable in thoracic radiology, but have also substantially increased the data load for radiologists. Automating the analysis of such data is, therefore, a necessity and this has created a rapidly developing research area in medical imaging. This paper presents a review of the literature on computer analysis of the lungs in CT scans and addresses segmentation of various pulmonary structures, registration of chest scans, and applications aimed at detection, classification and quantification of chest abnormalities. In addition, research trends and challenges are identified and directions for future research are discussed.

553 citations


Journal ArticleDOI
TL;DR: Experimental data were used to compare images reconstructed by the standard iterative reconstruction software and the one modeling the response function, and the results showed that the modeling of the response function improves both spatial resolution and noise properties.
Abstract: The quality of images reconstructed by statistical iterative methods depends on an accurate model of the relationship between image space and projection space through the system matrix. The elements of the system matrix for the clinical Hi-Rez scanner were derived by processing the data measured for a point source at different positions in a portion of the field of view. These measured data included axial compression and azimuthal interleaving of adjacent projections. Measured data were corrected for crystal and geometrical efficiency. Then, a whole system matrix was derived by processing the responses in projection space. Such responses included both geometrical and detection physics components of the system matrix. The response was parameterized to correct for point source location and to smooth for projection noise. The model also accounts for axial compression (span) used on the scanner. The forward projector for iterative reconstruction was constructed using the estimated response parameters. This paper extends our previous work to the fully three-dimensional case. Experimental data were used to compare images reconstructed by the standard iterative reconstruction software and by the one modeling the response function. The results showed that the modeling of the response function improves both spatial resolution and noise properties.

520 citations


Journal ArticleDOI
TL;DR: This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations.
Abstract: Reconstructing low-dose X-ray computed tomography (CT) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a Markov random field (MRF) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loeve (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging.
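For readers unfamiliar with the PWLS formulation, the generic cost minimized in the sinogram domain has the following form; the notation here is generic, not necessarily the paper's symbols. Here ŷ is the noisy sinogram, Σ a diagonal matrix of estimated data variances, and R a quadratic MRF penalty over neighboring bins and views.

```latex
\Phi(s) = (\hat{y}-s)^{T}\,\Sigma^{-1}\,(\hat{y}-s) + \beta\,R(s),
\qquad
R(s) = \sum_{j}\sum_{k \in \mathcal{N}_j} w_{jk}\,(s_j - s_k)^{2}
```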

519 citations


Journal ArticleDOI
TL;DR: A new likelihood ratio test that combines matched-filter responses, confidence measures and vessel boundary measures is presented, embedded into a vessel tracing framework, resulting in an efficient and effective vessel centerline extraction algorithm.
Abstract: Motivated by the goals of improving detection of low-contrast and narrow vessels and eliminating false detections at nonvascular structures, a new technique is presented for extracting vessels in retinal images. The core of the technique is a new likelihood ratio test that combines matched-filter responses, confidence measures and vessel boundary measures. Matched filter responses are derived in scale-space to extract vessels of widely varying widths. A vessel confidence measure is defined as a projection of a vector formed from a normalized pixel neighborhood onto a normalized ideal vessel profile. Vessel boundary measures and associated confidences are computed at potential vessel boundaries. Combined, these responses form a six-dimensional measurement vector at each pixel. A training technique is used to develop a mapping of this vector to a likelihood ratio that measures the "vesselness" at each pixel. Results comparing this vesselness measure to matched filters alone and to measures based on the Hessian of intensities show substantial improvements, both qualitatively and quantitatively. The Hessian can be used in place of the matched filter to obtain similar but less-substantial improvements or to steer the matched filter by preselecting kernel orientations. Finally, the new vesselness likelihood ratio is embedded into a vessel tracing framework, resulting in an efficient and effective vessel centerline extraction algorithm
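The vessel confidence measure described here amounts to a normalized projection (essentially a correlation) between a local pixel neighborhood and an ideal vessel cross-section profile. A minimal sketch, with the normalization details assumed rather than taken from the paper:

```python
import numpy as np

def vessel_confidence(neighborhood, ideal_profile):
    """Confidence as the projection of a normalized neighborhood onto a
    normalized ideal vessel profile (illustrative normalization)."""
    v = neighborhood.ravel().astype(float)
    p = ideal_profile.ravel().astype(float)
    v = (v - v.mean()) / (np.linalg.norm(v - v.mean()) + 1e-12)
    p = (p - p.mean()) / (np.linalg.norm(p - p.mean()) + 1e-12)
    return float(np.dot(v, p))   # in [-1, 1]; near 1 for vessel-like neighborhoods
```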

359 citations


Journal ArticleDOI
TL;DR: It is shown how image contrast normalization can improve the ability to distinguish between MAs and other dots that occur on the retina.
Abstract: Screening programs using retinal photography for the detection of diabetic eye disease are being introduced in the U.K. and elsewhere. Automatic grading of the images is being considered by health boards so that the human grading task is reduced. Microaneurysms (MAs) are the earliest sign of this disease and so are very important for classifying whether images show signs of retinopathy. This paper describes automatic methods for MA detection and shows how image contrast normalization can improve the ability to distinguish between MAs and other dots that occur on the retina. Various methods for contrast normalization are compared. Best results were obtained with a method that uses the watershed transform to derive a region that contains no vessels or other lesions. Dots within vessels are handled successfully using a local vessel detection technique. Results are presented for detection of individual MAs and for detection of images containing MAs. Images containing MAs are detected with sensitivity 85.4% and specificity 83.1%

311 citations


Journal ArticleDOI
TL;DR: In vivo interobserver and interscan studies on low-dose data from eight clinical metastasis patients revealed that clinically significant volume change can be detected reliably and with negligible computation time by the presented methods.
Abstract: Volumetric growth assessment of pulmonary lesions is crucial to both lung cancer screening and oncological therapy monitoring. While several methods for small pulmonary nodules have previously been presented, the segmentation of larger tumors that appear frequently in oncological patients and are more likely to be complexly interconnected with lung morphology has not yet received much attention. We present a fast, automated segmentation method that is based on morphological processing and is suitable for both small and large lesions. In addition, the proposed approach addresses clinical challenges to volume assessment such as variations in imaging protocol or inspiration state by introducing a method of segmentation-based partial volume analysis (SPVA) that follows on the segmentation procedure. Accuracy and reproducibility studies were performed to evaluate the new algorithms. In vivo interobserver and interscan studies on low-dose data from eight clinical metastasis patients revealed that clinically significant volume change can be detected reliably and with negligible computation time by the presented methods. In addition, phantom studies were conducted. Based on the segmentation performed with the proposed method, the performance of the SPVA volumetry method was compared with the conventional technique on a phantom that was scanned with different dosages and reconstructed with varying parameters. Both systematic and absolute errors were shown to be reduced substantially by the SPVA method. The method was especially successful in accounting for slice thickness and reconstruction kernel variations, where the median error was more than halved in comparison to the conventional approach.

281 citations


Journal ArticleDOI
TL;DR: A novel Bayesian modeling approach for white matter tractography is presented, and the uncertainty associated with estimated white matter fiber paths is investigated, and a method for calculating the probability of a connection between two areas in the brain is introduced.
Abstract: White matter fiber bundles in the human brain can be located by tracing the local water diffusion in diffusion weighted magnetic resonance imaging (MRI) images. In this paper, a novel Bayesian modeling approach for white matter tractography is presented. The uncertainty associated with estimated white matter fiber paths is investigated, and a method for calculating the probability of a connection between two areas in the brain is introduced. The main merits of the presented methodology are its simple implementation and its ability to handle noise in a theoretically justified way. Theory for estimating global connectivity is also presented, as well as a theorem that facilitates the estimation of the parameters in a constrained tensor model of the local water diffusion profile

Journal ArticleDOI
TL;DR: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented and the applicability of the framework can be extended to diseased brains and neonatal brains.
Abstract: An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by the affiliation of each voxel to the component of the model that maximizes the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required and the applicability of the framework can be extended to diseased brains and neonatal brains.
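A heavily simplified sketch of the mixture-based segmentation idea: one spatial-intensity Gaussian mixture per tissue, followed by maximum-a-posteriori voxel assignment. The parameter tying, constrained EM, and initialization scheme that are central to the paper are not reproduced here, and class priors are assumed equal.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_tissue_mixtures(coords, intensities, labels, n_components=25):
    """One spatial-intensity Gaussian mixture per tissue (e.g. CSF/GM/WM),
    fitted on labeled training voxels; features are (x, y, z, intensity)."""
    X = np.column_stack([coords, intensities])
    return {t: GaussianMixture(n_components, covariance_type="full").fit(X[labels == t])
            for t in np.unique(labels)}

def segment(coords, intensities, mixtures):
    """Assign each voxel to the tissue with the highest likelihood
    (equal priors assumed, unlike the paper's constrained formulation)."""
    X = np.column_stack([coords, intensities])
    scores = np.stack([m.score_samples(X) for m in mixtures.values()], axis=1)
    tissues = np.array(list(mixtures.keys()))
    return tissues[np.argmax(scores, axis=1)]
```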

Journal ArticleDOI
TL;DR: A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM) by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data.
Abstract: A method is presented for estimation of dense breast tissue volume from mammograms obtained with full-field digital mammography (FFDM). The thickness of dense tissue mapping to a pixel is determined by using a physical model of image acquisition. This model is based on the assumption that the breast is composed of two types of tissue, fat and parenchyma. Effective linear attenuation coefficients of these tissues are derived from empirical data as a function of tube voltage (kVp), anode material, filtration, and compressed breast thickness. By employing these, tissue composition at a given pixel is computed after performing breast thickness compensation, using a reference value for fatty tissue determined by the maximum pixel value in the breast tissue projection. Validation has been performed using 22 FFDM cases acquired with a GE Senographe 2000D by comparing the volume estimates with volumes obtained by semi-automatic segmentation of breast magnetic resonance imaging (MRI) data. The correlation between MRI and mammography volumes was 0.94 on a per image basis and 0.97 on a per patient basis. Using the dense tissue volumes from MRI data as the gold standard, the average relative error of the volume estimates was 13.6%.
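The physical model reduces, per pixel, to a two-compartment attenuation equation: given a reference pixel value behind purely fatty tissue of the same compressed thickness, the dense-tissue thickness follows from the log-ratio of pixel values and the difference of the effective attenuation coefficients. A simplified sketch (scatter and other acquisition effects are ignored, and the coefficient values in the usage comment are placeholders, not calibrated values):

```python
import numpy as np

def dense_thickness_map(pixel_values, fat_reference, mu_fat, mu_dense):
    """Per-pixel dense-tissue thickness from a two-compartment attenuation model.

    Assumes log-linear attenuation with effective coefficients mu_fat and
    mu_dense (1/cm) for the given kVp/anode/filtration, and a reference pixel
    value behind purely fatty tissue at the same compressed breast thickness.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.log(fat_reference / pixel_values) / (mu_dense - mu_fat)
    return np.clip(d, 0, None)          # thickness in cm, clipped at zero

# Dense-tissue volume (cm^3) = sum of per-pixel thicknesses * pixel area, e.g.
# volume = dense_thickness_map(I, I_fat, mu_fat=0.5, mu_dense=0.8).sum() * pixel_area_cm2
# (0.5 and 0.8 are placeholder effective coefficients, not values from the paper)
```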

Journal ArticleDOI
TL;DR: An image-processing technique for rib suppression by means of a multiresolution massive training artificial neural network (MTANN) would be potentially useful for radiologists as well as for CAD schemes in detection of lung nodules on chest radiographs.
Abstract: When lung nodules overlap with ribs or clavicles in chest radiographs, it can be difficult for radiologists as well as computer-aided diagnostic (CAD) schemes to detect these nodules. In this paper, we developed an image-processing technique for suppressing the contrast of ribs and clavicles in chest radiographs by means of a multiresolution massive training artificial neural network (MTANN). An MTANN is a highly nonlinear filter that can be trained by use of input chest radiographs and the corresponding "teaching" images. We employed "bone" images obtained by use of a dual-energy subtraction technique as the teaching images. For effective suppression of ribs having various spatial frequencies, we developed a multiresolution MTANN consisting of multiresolution decomposition/composition techniques and three MTANNs for three different-resolution images. After training with input chest radiographs and the corresponding dual-energy bone images, the multiresolution MTANN was able to provide "bone-image-like" images which were similar to the teaching bone images. By subtracting the bone-image-like images from the corresponding chest radiographs, we were able to produce "soft-tissue-image-like" images where ribs and clavicles were substantially suppressed. We used a validation test database consisting of 118 chest radiographs with pulmonary nodules and an independent test database consisting of 136 digitized screen-film chest radiographs with 136 solitary pulmonary nodules collected from 14 medical institutions in this study. When our technique was applied to nontraining chest radiographs, ribs and clavicles in the chest radiographs were suppressed substantially, while the visibility of nodules and lung vessels was maintained. Thus, our image-processing technique for rib suppression by means of a multiresolution MTANN would be potentially useful for radiologists as well as for CAD schemes in detection of lung nodules on chest radiographs.

Journal ArticleDOI
Ye Xu, Milan Sonka, G. McLennan, Junfeng Guo, Eric A. Hoffman
TL;DR: 3-D AMFM analysis of lung parenchyma improves discrimination and may provide a means of discriminating subtle differences between smokers and nonsmokers, both with normal PFTs.
Abstract: Our goal is to enhance the ability to differentiate normal lung from subtle pathologies via multidetector row CT (MDCT) by extending a two-dimensional (2-D) texture-based tissue classification [adaptive multiple feature method (AMFM)] to use three-dimensional (3-D) texture features. We performed MDCT on 34 humans and classified volumes of interest (VOIs) in the MDCT images into five categories: EC, emphysema in severe chronic obstructive pulmonary disease (COPD); MC, mild emphysema in mild COPD; NC, normal appearing lung in mild COPD; NN, normal appearing lung in normal nonsmokers; and NS, normal appearing lung in normal smokers. COPD severity was based upon pulmonary function tests (PFTs). Airways and vessels were excluded from VOIs; 24 3-D texture features were calculated; and a Bayesian classifier was used for discrimination. A leave-one-out method was employed for validation. Sensitivity of the four-class classification in the form of 3-D/2-D was: EC: 85%/71%; MC: 90%/82%; NC: 88%/50%; NN: 100%/60%. Sensitivity and specificity for NN using a two-class classification of NN and NS in the form of 3-D/2-D were: 99%/72% and 100%/75%, respectively. We conclude that 3-D AMFM analysis of lung parenchyma improves discrimination compared to 2-D AMFM of the same VOIs. Furthermore, our results suggest that the 3-D AMFM may provide a means of discriminating subtle differences between smokers and nonsmokers, both with normal PFTs.

Journal ArticleDOI
TL;DR: Results indicate that this proactive model, which integrates a priori knowledge on the cardiac anatomy and on its dynamical behavior, can improve the accuracy and robustness of the extraction of functional parameters from cardiac images even in the presence of noisy or sparse data.
Abstract: This paper presents a new three-dimensional electromechanical model of the two cardiac ventricles designed both for the simulation of their electrical and mechanical activity, and for the segmentation of time series of medical images. First, we present the volumetric biomechanical models built. Then the transmembrane potential propagation is simulated, based on FitzHugh-Nagumo reaction-diffusion equations. The myocardium contraction is modeled through a constitutive law including an electromechanical coupling. Simulation of a cardiac cycle, with boundary conditions representing blood pressure and volume constraints, leads to the correct estimation of global and local parameters of the cardiac function. This model enables the introduction of pathologies and the simulation of electrophysiology interventions. Moreover, it can be used for cardiac image analysis. A new proactive deformable model of the heart is introduced to segment the two ventricles in time series of cardiac images. Preliminary results indicate that this proactive model, which integrates a priori knowledge on the cardiac anatomy and on its dynamical behavior, can improve the accuracy and robustness of the extraction of functional parameters from cardiac images even in the presence of noisy or sparse data. Such a model also allows the simulation of cardiovascular pathologies in order to test therapy strategies and to plan interventions.
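The electrical part of such a model is a reaction-diffusion system of FitzHugh-Nagumo type. A minimal two-dimensional sketch of a generic monodomain FitzHugh-Nagumo step is given below; the constitutive law, the electromechanical coupling, and the paper's actual parameter values are not represented.

```python
import numpy as np

def fitzhugh_nagumo_step(u, v, D=1.0, a=0.1, eps=0.01, dt=0.05, dx=1.0):
    """One explicit Euler step of a generic FitzHugh-Nagumo reaction-diffusion
    system on a 2-D grid (parameters illustrative, not the paper's)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    du = D * lap + u * (1 - u) * (u - a) - v        # transmembrane potential
    dv = eps * (u - v)                              # slow recovery variable
    return u + dt * du, v + dt * dv

# Stimulate a corner and iterate to produce a propagating depolarization wave:
# u = np.zeros((128, 128)); v = np.zeros_like(u); u[:5, :5] = 1.0
# for _ in range(2000): u, v = fitzhugh_nagumo_step(u, v)
```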

Journal ArticleDOI
TL;DR: An X-ray system with a large area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images, and a scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure.
Abstract: An X-ray system with a large area detector has high scatter-to-primary ratios (SPRs), which result in severe artifacts in reconstructed computed tomography (CT) images. A scatter correction algorithm is introduced that provides effective scatter correction but does not require additional patient exposure. The key hypothesis of the algorithm is that the high-frequency components of the X-ray spatial distribution do not result in strong high-frequency signals in the scatter. A calibration sheet with a checkerboard pattern of semitransparent blockers (a "primary modulator") is inserted between the X-ray source and the object. The primary distribution is partially modulated by a high-frequency function, while the scatter distribution still has dominant low-frequency components, based on the hypothesis. Filtering and demodulation techniques suffice to extract the low-frequency components of the primary and hence obtain the scatter estimation. The hypothesis was validated using Monte Carlo (MC) simulation, and the algorithm was evaluated by both MC simulations and physical experiments. Reconstructions of a software humanoid phantom suggested system parameters in the physical implementation and showed that the proposed method reduced the relative mean square error of the reconstructed image in the central region of interest from 74.2% to below 1%. In preliminary physical experiments on the standard evaluation phantom, this error was reduced from 31.8% to 2.3%, and it was also demonstrated that the algorithm has no noticeable impact on the resolution of the reconstructed image in spite of the filter-based approach. Although the proposed scatter correction technique was implemented for X-ray CT, it can also be used in other X-ray imaging applications, as long as a primary modulator can be inserted between the X-ray source and the imaged object

Journal ArticleDOI
TL;DR: Experimental results by using both synthesized and real data show the good performance of the proposed model in segmenting prostates from ultrasound images.
Abstract: This paper presents a novel deformable model for automatic segmentation of prostates from three-dimensional ultrasound images, by statistical matching of both shape and texture. A set of Gabor-support vector machines (G-SVMs) are positioned on different patches of the model surface, and trained to adaptively capture texture priors of ultrasound images for differentiation of prostate and nonprostate tissues in different zones around prostate boundary. Each G-SVM consists of a Gabor filter bank for extraction of rotation-invariant texture features and a kernel support vector machine for robust differentiation of textures. In the deformable segmentation procedure, these pretrained G-SVMs are used to tentatively label voxels around the surface of deformable model as prostate or nonprostate tissues by a statistical texture matching. Subsequently, the surface of deformable model is driven to the boundary between the tentatively labeled prostate and nonprostate tissues. Since the step of tissue labeling and the step of label-based surface deformation are dependent on each other, these two steps are repeated until they converge. Experimental results by using both synthesized and real data show the good performance of the proposed model in segmenting prostates from ultrasound images.

Journal ArticleDOI
TL;DR: The construction of 20 realistic digital brain phantoms that can be used to simulate medical imaging data is presented, and it is demonstrated that regional intensity variations between simulated and real scans are small when the real data are corrected for intensity nonuniformity.
Abstract: Simulations provide a way of generating data where ground truth is known, enabling quantitative testing of image processing methods. In this paper, we present the construction of 20 realistic digital brain phantoms that can be used to simulate medical imaging data. The phantoms are made from 20 normal adults to take into account intersubject anatomical variabilities. Each digital brain phantom was created by registering and averaging four T1, T2, and proton density (PD)-weighted magnetic resonance imaging (MRI) scans from each subject. A fuzzy minimum distance classification was used to classify voxel intensities from T1, T2, and PD average volumes into grey-matter, white matter, cerebro-spinal fluid, and fat. Automatically generated mask volumes were required to separate brain from nonbrain structures and ten fuzzy tissue volumes were created: grey matter, white matter, cerebro-spinal fluid, skull, marrow within the bone, dura, fat, tissue around the fat, muscles, and skin/muscles. A fuzzy vessel class was also obtained from the segmentation of the magnetic resonance angiography scan of the subject. These eleven fuzzy volumes that describe the spatial distribution of anatomical tissues define the digital phantom, where voxel intensity is proportional to the fraction of tissue within the voxel. These fuzzy volumes can be used to drive simulators for different modalities including MRI, PET, or SPECT. These phantoms were used to construct 20 simulated T1-weighted MR scans. To evaluate the realism of these simulations, we propose two approaches to compare them to real data acquired with the same acquisition parameters. The first approach consists of comparing the intensities within the segmented classes in both real and simulated data. In the second approach, a whole brain voxel-wise comparison between simulations and real T1-weighted data is performed. The first comparison underlines that segmented classes appear to properly represent the anatomy on average, and that inside these classes, the simulated and real intensity values are quite similar. The second comparison enables the study of the regional variations with no a priori class. The experiments demonstrate that these variations are small when real data are corrected for intensity nonuniformity

Journal ArticleDOI
TL;DR: Methods for measuring the change in nodule size from two computed tomography image scans recorded at different times are presented; from this size change the growth rate may be established. Isotropic resampling is shown to improve measurement accuracy.
Abstract: The pulmonary nodule is the most common manifestation of lung cancer, the most deadly of all cancers. Most small pulmonary nodules are benign, however, and currently the growth rate of the nodule provides for one of the most accurate noninvasive methods of determining malignancy. In this paper, we present methods for measuring the change in nodule size from two computed tomography image scans recorded at different times; from this size change the growth rate may be established. The impact of partial voxels for small nodules is evaluated and isotropic resampling is shown to improve measurement accuracy. Methods for nodule location and sizing, pleural segmentation, adaptive thresholding, image registration, and knowledge-based shape matching are presented. The latter three techniques provide for a significant improvement in volume change measurement accuracy by considering both image scans simultaneously. Improvements in segmentation are evaluated by measuring volume changes in benign or slow growing nodules. In the analysis of 50 nodules, the variance in percent volume change was reduced from 11.54% to 9.35% (p=0.03) through the use of registration, adaptive thresholding, and knowledge-based shape matching.
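Once two volumes and the scan interval are available, the growth rate is typically summarized as a percent volume change or an exponential-growth doubling time. A short worked sketch; the formulas are standard, not specific to this paper:

```python
import numpy as np

def volume_change_percent(v1, v2):
    """Percent volume change between two scans."""
    return 100.0 * (v2 - v1) / v1

def doubling_time_days(v1, v2, delta_t_days):
    """Doubling time under an exponential-growth assumption."""
    return delta_t_days * np.log(2) / np.log(v2 / v1)

# Example: 400 mm^3 -> 520 mm^3 over 90 days gives +30% change
# and a doubling time of about 238 days.
```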

Journal ArticleDOI
TL;DR: The use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures is demonstrated.
Abstract: High-resolution X-ray computed tomography (CT) imaging is routinely used for clinical pulmonary applications. Since lung function varies regionally and because pulmonary disease is usually not uniformly distributed in the lungs, it is useful to study the lungs on a lobe-by-lobe basis. Thus, it is important to segment not only the lungs, but the lobar fissures as well. In this paper, we demonstrate the use of an anatomic pulmonary atlas, encoded with a priori information on the pulmonary anatomy, to automatically segment the oblique lobar fissures. Sixteen volumetric CT scans from 16 subjects are used to construct the pulmonary atlas. A ridgeness measure is applied to the original CT images to enhance the fissure contrast. Fissure detection is accomplished in two stages: an initial fissure search and a final fissure search. A fuzzy reasoning system is used in the fissure search to analyze information from three sources: the image intensity, an anatomic smoothness constraint, and the atlas-based search initialization. Our method has been tested on 22 volumetric thin-slice CT scans from 12 subjects, and the results are compared to manual tracings. Averaged across all 22 data sets, the RMS error between the automatically segmented and manually segmented fissures is 1.96 ± 0.71 mm and the mean of the similarity indices between the manually defined and computer-defined lobe regions is 0.988. The results indicate a strong agreement between the automatic and manual lobe segmentations.

Journal ArticleDOI
TL;DR: This work respiratory-gates the PET data and corrects it for motion with optical flow algorithms; the resulting dataset contains all the PET information with minimal motion and thus allows more accurate attenuation correction and quantification.
Abstract: Motion is a source of degradation in positron emission tomography (PET)/computed tomography (CT) images. As the PET images represent the sum of information over the whole respiratory cycle, attenuation correction with the help of CT images may lead to false staging or quantification of the radioactive uptake especially in the case of small tumors. We present an approach avoiding these difficulties by respiratory-gating the PET data and correcting it for motion with optical flow algorithms. The resulting dataset contains all the PET information and minimal motion and, thus, allows more accurate attenuation correction and quantification.

Journal ArticleDOI
TL;DR: A novel nonlinear multiscale wavelet diffusion method for ultrasound speckle suppression and edge enhancement designed to utilize the favorable denoising properties of two frequently used techniques: the sparsity and multiresolution properties of the wavelet and the iterative edge enhancement feature of nonlinear diffusion.
Abstract: This paper introduces a novel nonlinear multiscale wavelet diffusion method for ultrasound speckle suppression and edge enhancement. This method is designed to utilize the favorable denoising properties of two frequently used techniques: the sparsity and multiresolution properties of the wavelet, and the iterative edge enhancement feature of nonlinear diffusion. With fully exploited knowledge of speckle image models, the edges of images are detected using normalized wavelet modulus. Relying on this feature, both the envelope-detected speckle image and the log-compressed ultrasonic image can be directly processed by the algorithm without need for additional preprocessing. Speckle is suppressed by employing the iterative multiscale diffusion on the wavelet coefficients. With a tuning diffusion threshold strategy, the proposed method can improve the image quality for both visualization and auto-segmentation applications. We validate our method using synthetic speckle images and real ultrasonic images. Performance improvement over other despeckling filters is quantified in terms of noise suppression and edge preservation indices.

Journal ArticleDOI
TL;DR: A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner and the improvement in axial resolution requires no changes in hardware.
Abstract: This paper demonstrates a super-resolution method for improving the resolution in clinical positron emission tomography (PET) scanners. Super-resolution images were obtained by combining four data sets with spatial shifts between consecutive acquisitions and applying an iterative algorithm. Super-resolution attenuation corrected PET scans of a phantom were obtained using the two-dimensional and three-dimensional (3-D) acquisition modes of a clinical PET/computed tomography (CT) scanner (Discovery LS, GEMS). In a patient study, following a standard 18F-FDG PET/CT scan, a super-resolution scan around one small lesion was performed using axial shifts without increasing the patient radiation exposure. In the phantom study, smaller features (3 mm) could be resolved axially with the super-resolution method than without (6 mm). The super-resolution images had better resolution than the original images and provided higher contrast ratios in coronal images and in 3-D acquisition transaxial images. The coronal super-resolution images had superior resolution and contrast ratios compared to images reconstructed by merely interleaving the data to the proper axial location. In the patient study, super-resolution reconstructions displayed a more localized 18F-FDG uptake. A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner. The improvement in axial resolution requires no changes in hardware.
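To illustrate the idea of super-resolution from shifted acquisitions, the sketch below implements a generic iterative back-projection scheme in one (axial) dimension. The acquisition model (box downsampling, integer sub-sample shifts) and the step size are assumptions and do not correspond to the authors' algorithm.

```python
import numpy as np

def super_resolve_1d(low_res_scans, shifts, factor=2, n_iter=20, step=0.5):
    """Generic iterative back-projection for axially shifted acquisitions.

    low_res_scans : list of 1-D profiles sampled on the coarse axial grid
    shifts        : integer sub-sample shifts (in high-res samples) per acquisition
    factor        : upsampling factor of the high-resolution estimate
    """
    n_hi = len(low_res_scans[0]) * factor
    hi = np.full(n_hi, np.mean(low_res_scans))          # flat initial estimate

    def downsample(x, shift):
        x = np.roll(x, -shift)                          # apply acquisition shift
        return x.reshape(-1, factor).mean(axis=1)       # box downsampling model

    for _ in range(n_iter):
        for y, s in zip(low_res_scans, shifts):
            err = y - downsample(hi, s)                 # residual in low-res space
            hi += step * np.roll(np.repeat(err, factor) / factor, s)  # back-project
    return hi
```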

Journal ArticleDOI
TL;DR: It is shown that vascular structures of the human retina represent geometrical multifractals, characterized by a hierarchy of exponents rather than a single fractal dimension.
Abstract: In this paper, it is shown that vascular structures of the human retina represent geometrical multifractals, characterized by a hierarchy of exponents rather than a single fractal dimension. A number of retinal images from the STARE database are analyzed, corresponding to both normal and pathological states of the retina. In all studied cases, a clearly multifractal behavior is observed, where the capacity dimension is always found to be larger than the information dimension, which is in turn always larger than the correlation dimension, all three being significantly lower than the diffusion limited aggregation (DLA) fractal dimension. We also observe a tendency of images corresponding to the pathological states of the retina to have lower generalized dimensions and a shifted spectrum range, in comparison with the normal cases.
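The generalized dimensions mentioned above (capacity, information, and correlation dimensions) can be estimated by box counting on the binary vessel mask. The sketch below shows a standard D_q estimator; it is not necessarily the exact procedure used in the paper.

```python
import numpy as np

def generalized_dimension(mask, q, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Box-counting estimate of the generalized dimension D_q of a binary mask.

    D_q is the slope of log(sum_i p_i^q)/(q-1) versus log(box size); q=0 gives
    the capacity dimension, q->1 the information dimension, and q=2 the
    correlation dimension.
    """
    logs_eps, logs_z = [], []
    for s in box_sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / boxes.sum()              # box occupation probabilities
        if np.isclose(q, 1.0):
            logs_z.append(np.sum(p * np.log(p)))        # information-dimension limit
        else:
            logs_z.append(np.log(np.sum(p ** q)) / (q - 1))
        logs_eps.append(np.log(s))
    return np.polyfit(logs_eps, logs_z, 1)[0]
```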

Journal ArticleDOI
TL;DR: The purpose of this study is to provide physicians with a three-dimensional (3-D) model of coronary arteries, e.g., for absolute 3-D measures for lesion assessment, instead of direct projective measures deduced from the images, which are highly dependent on the viewing angle.
Abstract: Cardiovascular diseases remain the primary cause of death in developed countries. In most cases, exploration of possibly underlying coronary artery pathologies is performed using X-ray coronary angiography. Current clinical routine in coronary angiography is directly conducted in two-dimensional projection images from several static viewing angles. However, for diagnosis and treatment purposes, coronary artery reconstruction is highly suitable. The purpose of this study is to provide physicians with a three-dimensional (3-D) model of coronary arteries, e.g., for absolute 3-D measures for lesion assessment, instead of direct projective measures deduced from the images, which are highly dependent on the viewing angle. In this paper, we propose a novel method to reconstruct coronary arteries from one single rotational X-ray projection sequence. As a side result, we also obtain an estimation of the coronary artery motion. Our method consists of three main consecutive steps: 1) 3-D reconstruction of coronary artery centerlines, including respiratory motion compensation; 2) coronary artery four-dimensional motion computation; 3) 3-D tomographic reconstruction of coronary arteries, involving compensation for respiratory and cardiac motions. We present some experiments on clinical datasets, and the feasibility of a true 3-D Quantitative Coronary Analysis is demonstrated.

Journal ArticleDOI
TL;DR: It is found that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution.
Abstract: We formulate computed tomography (CT) sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. CT measurement data are degraded by a number of factors-including beam hardening and off-focal radiation-that produce artifacts in reconstructed images unless properly corrected. Currently, such effects are addressed by a sequence of sinogram-preprocessing steps, including deconvolution corrections for off-focal radiation, that have the potential to amplify noise. Noise itself is generally mitigated through apodization of the reconstruction kernel, which effectively ignores the measurement statistics, although in high-noise situations adaptive filtering methods that loosely model data statistics are sometimes applied. As an alternative, we present a general imaging model relating the degraded measurements to the sinogram of ideal line integrals and propose to estimate these line integrals by iteratively optimizing a statistically based objective function. We consider three different strategies for estimating the set of ideal line integrals, one based on direct estimation of ideal "monochromatic" line integrals that have been corrected for single-material beam hardening, one based on estimation of ideal "polychromatic" line integrals that can be readily mapped to monochromatic line integrals, and one based on estimation of ideal transmitted intensities, from which ideal, monochromatic line integrals can be readily estimated. The first two approaches involve maximization of a penalized Poisson-likelihood objective function while the third involves minimization of a quadratic penalized weighted least squares (PWLS) objective applied in the transmitted intensity domain. We find that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution. At higher exposure levels, the approaches all perform similarly

Journal ArticleDOI
TL;DR: An approach to the voxel-wise optimization of regional mutual information (RMI) is derived and used to drive a viscous fluid deformation model between images in a symmetric registration process; results show a significant reduction in errors when tissue contrast changes locally between acquisitions.
Abstract: This paper is motivated by the analysis of serial structural magnetic resonance imaging (MRI) data of the brain to map patterns of local tissue volume loss or gain over time, using registration-based deformation tensor morphometry. Specifically, we address the important confound of local tissue contrast changes which can be induced by neurodegenerative or neurodevelopmental processes. These not only modify apparent tissue volume, but also modify tissue integrity and its resulting MRI contrast parameters. In order to address this confound we derive an approach to the voxel-wise optimization of regional mutual information (RMI) and use this to drive a viscous fluid deformation model between images in a symmetric registration process. A quantitative evaluation of the method when compared to earlier approaches is included using both synthetic data and clinical imaging data. Results show a significant reduction in errors when tissue contrast changes locally between acquisitions. Finally, examples of applying the technique to map different patterns of atrophy rate in different neurodegenerative conditions are included.
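At the core of any mutual-information-driven registration is the MI of two intensity samples estimated from a joint histogram. The sketch below computes plain MI for a pair of regions; the paper's regional MI formulation and its voxel-wise optimization are more involved, so treat this as background only.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two image regions from their joint histogram.

    In a regional-MI scheme this would be evaluated on a window around each
    voxel; here it is shown for a single pair of regions.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```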

Journal ArticleDOI
TL;DR: The detector blurring component of the system model for a whole body positron emission tomography (PET) system is explored and extended into a more general system response function to account for other system effects including the influence of Fourier rebinning (FORE).
Abstract: Appropriate application of spatially variant system models can correct for degraded resolution response and mispositioning errors. This paper explores the detector blurring component of the system model for a whole body positron emission tomography (PET) system and extends this factor into a more general system response function to account for other system effects including the influence of Fourier rebinning (FORE). We model the system response function as a three-dimensional (3-D) function that blurs in the radial and axial dimension and is spatially variant in radial location. This function is derived from Monte Carlo simulations and incorporates inter-crystal scatter, crystal penetration, and the blurring due to the FORE algorithm. The improved system model is applied in a modified ordered subsets expectation maximization (OSEM) algorithm to reconstruct images from rebinned, fully 3-D PET data. The proposed method effectively removes the spatial variance in the resolution response, as shown in simulations of point sources. Furthermore, simulation and measured studies show the proposed method improves quantitative accuracy with a reduction in tumor bias compared to conventional OSEM on the order of 10%-30% depending on tumor size and smoothing parameter

Journal ArticleDOI
TL;DR: Statistical inversion allows stable solution of the limited-angle tomography problem by complementing the measurement data with a priori information, using a Besov space prior distribution together with a positivity constraint.
Abstract: The aim of X-ray tomography is to reconstruct an unknown physical body from a collection of projection images. When the projection images are only available from a limited angle of view, the reconstruction problem is a severely ill-posed inverse problem. Statistical inversion allows stable solution of the limited-angle tomography problem by complementing the measurement data by a priori information. In this work, the unknown attenuation distribution inside the body is represented as a wavelet expansion, and a Besov space prior distribution together with positivity constraint is used. The wavelet expansion is thresholded before reconstruction to reduce the dimension of the computational problem. Feasibility of the method is demonstrated by numerical examples using in vitro data from mammography and dental radiology.