
Showing papers in "IEEE Transactions on Medical Imaging in 2001"


Journal ArticleDOI
TL;DR: The authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but can be indirectly estimated through observations.
Abstract: The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation-no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. Here, the authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. The authors show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, the authors show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.

6,335 citations
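
For readers who want the gist in code, below is a minimal 2-D sketch of the HMRF-EM idea: Gaussian class likelihoods combined with a Potts-style neighborhood term inside an EM loop. The quantile initialization, the 4-neighborhood with wrap-around boundaries, and the value of beta are illustrative assumptions of ours, not the authors' implementation (which also handles bias field correction).

```python
import numpy as np

def hmrf_em(img, n_classes=3, beta=1.0, n_iter=10):
    # crude initialization: class means at intensity quantiles (assumption)
    means = np.quantile(img, np.linspace(0.1, 0.9, n_classes))
    var = np.full(n_classes, img.var() / n_classes)
    labels = np.abs(img[..., None] - means).argmin(-1)

    for _ in range(n_iter):
        # count agreeing 4-neighbors per class (np.roll wraps at borders)
        votes = np.zeros(img.shape + (n_classes,))
        for k in range(n_classes):
            m = (labels == k).astype(float)
            votes[..., k] = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                             np.roll(m, 1, 1) + np.roll(m, -1, 1))
        # E-step: posterior ~ Gaussian likelihood x exp(beta * neighbor votes)
        loglik = (-0.5 * (img[..., None] - means) ** 2 / var
                  - 0.5 * np.log(2 * np.pi * var) + beta * votes)
        post = np.exp(loglik - loglik.max(-1, keepdims=True))
        post /= post.sum(-1, keepdims=True)
        labels = post.argmax(-1)
        # M-step: re-estimate class means and variances from posteriors
        w = post.reshape(-1, n_classes)
        x = img.reshape(-1).astype(float)
        means = (w * x[:, None]).sum(0) / w.sum(0)
        var = (w * (x[:, None] - means) ** 2).sum(0) / w.sum(0)
    return labels, means, var
```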


Journal ArticleDOI
TL;DR: The authors have developed a technique for taking a model of the cortex and detecting and fixing the topological defects while leaving the majority of the model intact, resulting in a surface that is both geometrically accurate and topologically correct.
Abstract: Highly accurate surface models of the cerebral cortex are becoming increasingly important as tools in the investigation of the functional organization of the human brain. The construction of such models is difficult using current neuroimaging technology due to the high degree of cortical folding. Even single voxel misclassifications can result in erroneous connections being created between adjacent banks of a sulcus, resulting in a topologically inaccurate model. These topological defects cause the cortical model to no longer be homeomorphic to a sheet, preventing the accurate inflation, flattening, or spherical morphing of the reconstructed cortex. Surface deformation techniques can guarantee the topological correctness of a model, but are time-consuming and may result in geometrically inaccurate models. To address this need, the authors have developed a technique for taking a model of the cortex and detecting and fixing the topological defects while leaving the majority of the model intact, resulting in a surface that is both geometrically accurate and topologically correct.

1,629 citations


Journal ArticleDOI
TL;DR: A sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with those obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region.
Abstract: Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with those obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75% ± 2.29% (mean ± standard deviation).

1,013 citations
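
The three-step pipeline lends itself to a compact sketch. The scikit-image version below substitutes Otsu thresholding and fixed size thresholds for the paper's tuned values and omits the dynamic-programming separation of the left and right lungs; it is a sketch of the idea, not the authors' method.

```python
import numpy as np
from skimage import measure, morphology
from skimage.filters import threshold_otsu

def segment_lungs(ct_slice):
    # 1) gray-level thresholding: lung parenchyma is dark on CT
    binary = ct_slice < threshold_otsu(ct_slice)
    # discard air connected to the image border (outside the body)
    labeled = measure.label(binary)
    border_labels = np.unique(np.concatenate(
        [labeled[0], labeled[-1], labeled[:, 0], labeled[:, -1]]))
    lungs = binary & ~np.isin(labeled, border_labels)
    # keep substantial components and fill vessel holes
    lungs = morphology.remove_small_objects(lungs, min_size=500)
    lungs = morphology.remove_small_holes(lungs, area_threshold=500)
    # 3) smooth the irregular mediastinal boundary by morphological closing
    return morphology.binary_closing(lungs, morphology.disk(7))
```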


Journal ArticleDOI
TL;DR: The results show that the joint estimation of a consistent set of forward and reverse transformations constrained by linear elasticity gives better registration results than using either constraint alone or none at all.
Abstract: Presents a new method for image registration based on jointly estimating the forward and reverse transformations between two images while constraining these transforms to be inverses of one another. This approach produces a consistent set of transformations that have less pairwise registration error, i.e., better correspondence, than traditional methods that estimate the forward and reverse transformations independently. The transformations are estimated iteratively and are restricted to preserve topology by constraining them to obey the laws of continuum mechanics. The transformations are parameterized by a Fourier series to diagonalize the covariance structure imposed by the continuum mechanics constraints and to provide a computationally efficient numerical implementation. Results using a linear elastic material constraint are presented using both magnetic resonance and X-ray computed tomography image data. The results show that the joint estimation of a consistent set of forward and reverse transformations constrained by linear elasticity gives better registration results than using either constraint alone or none at all.

697 citations
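
In broad strokes, the joint estimation can be written as minimizing a symmetric functional of the following general form (our transcription, not the paper's exact notation), where h and g are the forward and reverse transformations with displacement fields u and w, and L is a linear-elasticity differential operator:

```latex
C(h, g) =
    \int \lvert T(h(x)) - S(x) \rvert^2 \, dx
  + \int \lvert S(g(x)) - T(x) \rvert^2 \, dx
  + \rho \int \lVert h(x) - g^{-1}(x) \rVert^2 \, dx
  + \lambda \left( \lVert L u \rVert^2 + \lVert L w \rVert^2 \right)
```

The third term penalizes deviation of each transformation from the inverse of the other; driving it toward zero is what yields the improved correspondence reported above.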


Journal ArticleDOI
TL;DR: One method, the preservation of principal direction algorithm, which takes into account shearing, stretching, and rigid rotation, is shown to be the most effective and to improve the consistency between registered and target images over naive warping algorithms.
Abstract: The authors address the problem of applying spatial transformations (or "image warps") to diffusion tensor magnetic resonance images. The orientational information that these images contain must be handled appropriately when they are transformed spatially during image registration. The authors present solutions for global transformations of three-dimensional images up to 12-parameter affine complexity and indicate how their methods can be extended for higher order transformations. Several approaches are presented and tested using synthetic data. One method, the preservation of principal direction algorithm, which takes into account shearing, stretching and rigid rotation, is shown to be the most effective. Additional registration experiments are performed on human brain data obtained from a single subject, whose head was imaged in three different orientations within the scanner. All of the authors' methods improve the consistency between registered and target images over naive warping algorithms.

682 citations
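
The preservation of principal direction strategy is easy to state in code: map the principal eigenvector through the local affine matrix, then re-orthogonalize the second eigenvector against it, and rotate the tensor accordingly. The numpy sketch below follows our reading of that description; handling of degenerate (near-isotropic) tensors is omitted.

```python
import numpy as np

def ppd_rotation(A, tensor):
    # unit eigenvectors sorted by descending eigenvalue (eigh is ascending)
    _, V = np.linalg.eigh(tensor)
    e1, e2 = V[:, 2], V[:, 1]
    # principal direction follows the affine map, then is normalized
    n1 = A @ e1
    n1 /= np.linalg.norm(n1)
    # second direction: project A @ e2 perpendicular to n1, renormalize
    n2 = A @ e2
    n2 -= (n2 @ n1) * n1
    n2 /= np.linalg.norm(n2)
    n3 = np.cross(n1, n2)
    # rotation mapping the old frame (e1,e2,e3) onto the new frame (n1,n2,n3)
    E = np.column_stack([e1, e2, np.cross(e1, e2)])
    N = np.column_stack([n1, n2, n3])
    return N @ E.T

def reorient_tensor(A, tensor):
    R = ppd_rotation(A, tensor)
    return R @ tensor @ R.T
```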


Journal ArticleDOI
TL;DR: 3-D model-based approaches have the capability to improve the diagnostic value of cardiac images, but issues such as robustness, 3-D interaction, computational complexity, and clinical validation still require significant attention.
Abstract: Three-dimensional (3-D) imaging of the heart is a rapidly developing area of research in medical imaging. Advances in hardware and methods for fast spatio-temporal cardiac imaging are extending the frontiers of clinical diagnosis and research on cardiovascular diseases. In the last few years, many approaches have been proposed to analyze images and extract parameters of cardiac shape and function from a variety of cardiac imaging modalities. In particular, techniques based on spatio-temporal geometric models have received considerable attention. This paper surveys the literature of two decades of research on cardiac modeling. The contribution of the paper is three-fold: (1) to serve as a tutorial of the field for both clinicians and technologists, (2) to provide an extensive account of modeling techniques in a comprehensive and systematic manner, and (3) to critically review these approaches in terms of their performance and degree of clinical evaluation with respect to the final goal of cardiac functional analysis. From this review it is concluded that whereas 3-D model-based approaches have the capability to improve the diagnostic value of cardiac images, issues such as robustness, 3-D interaction, computational complexity, and clinical validation still require significant attention.

625 citations


Journal ArticleDOI
TL;DR: A novel speckle suppression method for medical ultrasound images that models wavelet subband statistics with heavy-tailed alpha-stable distributions and designs a Bayesian estimator exploiting these statistics, yielding a blind noise-removal processor that performs a nonlinear operation on the data.
Abstract: A novel speckle suppression method for medical ultrasound images is presented. First, the logarithmic transform of the original image is analyzed into the multiscale wavelet domain. The authors show that the subband decompositions of ultrasound images have significantly non-Gaussian statistics that are best described by families of heavy-tailed distributions such as the alpha-stable. Then, the authors design a Bayesian estimator that exploits these statistics. They use the alpha-stable model to develop a blind noise-removal processor that performs a nonlinear operation on the data. Finally, the authors compare their technique with current state-of-the-art soft and hard thresholding methods applied on actual ultrasound medical images and they quantify the achieved performance improvement.

603 citations
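
The surrounding pipeline (log-transform, multiscale decomposition, nonlinear coefficient processing, inverse transform, exponentiation) can be sketched with PyWavelets as below. Note the substitution: a generic MAD-calibrated soft threshold stands in for the paper's alpha-stable Bayesian processor, which is the actual contribution and has no simple closed form.

```python
import numpy as np
import pywt

def despeckle(img, wavelet="db4", levels=3):
    logimg = np.log1p(img.astype(float))      # multiplicative -> additive noise
    coeffs = pywt.wavedec2(logimg, wavelet, level=levels)
    out = [coeffs[0]]                         # approximation kept untouched
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            # per-subband noise scale via the median absolute deviation
            sigma = np.median(np.abs(band)) / 0.6745
            shrunk.append(pywt.threshold(band, 3 * sigma, mode="soft"))
        out.append(tuple(shrunk))
    return np.expm1(pywt.waverec2(out, wavelet))
```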


Journal ArticleDOI
TL;DR: A system for the computerized analysis of images obtained from ELM has been developed to enhance the early recognition of malignant melanoma; the final kNN classification delivers a sensitivity of 87% with a specificity of 92%.
Abstract: A system for the computerized analysis of images obtained from epiluminescence microscopy (ELM) has been developed to enhance the early recognition of malignant melanoma. As an initial step, the binary mask of the skin lesion is determined by several basic segmentation algorithms together with a fusion strategy. A set of features containing shape and radiometric features as well as local and global parameters is calculated to describe the malignancy of a lesion. Significant features are then selected from this set by application of statistical feature subset selection methods. The final kNN classification delivers a sensitivity of 87% with a specificity of 92%.

594 citations


Journal ArticleDOI
TL;DR: A fully automated algorithm for segmentation of multiple sclerosis lesions from multispectral magnetic resonance (MR) images that performs intensity-based tissue classification using a stochastic model and simultaneously detects MS lesions as outliers that are not well explained by the model.
Abstract: This paper presents a fully automated algorithm for segmentation of multiple sclerosis (MS) lesions from multispectral magnetic resonance (MR) images. The method performs intensity-based tissue classification using a stochastic model for normal brain images and simultaneously detects MS lesions as outliers that are not well explained by the model. It corrects for MR field inhomogeneities, estimates tissue-specific intensity models from the data itself, and incorporates contextual information in the classification using a Markov random field. The results of the automated method are compared with lesion delineations by human experts, showing a high total lesion load correlation. When the degree of spatial correspondence between segmentations is taken into account, considerable disagreement is found, both between expert segmentations, and between expert and automatic measurements.

539 citations
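
A toy version of the outlier test at the heart of the method: flag voxels whose Mahalanobis distance to every normal-tissue class exceeds a chi-square cutoff. The paper embeds this idea inside a full EM classifier with bias-field correction and an MRF, none of which is shown; the cutoff probability is an illustrative choice.

```python
import numpy as np
from scipy.stats import chi2

def lesion_candidates(features, class_means, class_covs, p=0.999):
    # features: (n_voxels, n_channels) multispectral intensities
    cutoff = chi2.ppf(p, df=features.shape[1])
    d2_min = np.full(features.shape[0], np.inf)
    for mu, cov in zip(class_means, class_covs):
        diff = features - mu
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        d2_min = np.minimum(d2_min, d2)   # distance to best-fitting class
    return d2_min > cutoff                # True = outlier / lesion candidate
```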


Journal ArticleDOI
TL;DR: The purpose of this survey is to categorize and briefly review the literature on computer analysis of chest images, which comprises over 150 papers published in the last 30 years; some directions for future research are given.
Abstract: The traditional chest radiograph is still ubiquitous in clinical practice, and will likely remain so for quite some time. Yet, its interpretation is notoriously difficult. This explains the continued interest in computer-aided diagnosis for chest radiography. The purpose of this survey is to categorize and briefly review the literature on computer analysis of chest images, which comprises over 150 papers published in the last 30 years. Remaining challenges are indicated and some directions for future research are given.

524 citations


Journal ArticleDOI
TL;DR: The authors' present results show that their scheme can be regarded as a technique for CAD systems to detect nodules in helical CT pulmonary images.
Abstract: The purpose of this study is to develop a technique for computer-aided diagnosis (CAD) systems to detect lung nodules in helical X-ray pulmonary computed tomography (CT) images. The authors propose a novel template-matching technique based on a genetic algorithm (GA) template matching (GATM) for detecting nodules existing within the lung area; the GA was used to determine the target position in the observed image efficiently and to select an adequate template image from several reference patterns for quick template matching. In addition, a conventional template matching was employed to detect nodules existing on the lung wall area, lung wall template matching (LWTM), where semicircular models were used as reference patterns; the semicircular models were rotated according to the angle of the target point on the contour of the lung wall. After initially detecting candidates using the two template-matching methods, the authors extracted a total of 13 feature values and used them to eliminate false-positive findings. Twenty clinical cases involving a total of 557 sectional images were used in this study. Of 98 nodules, 71 were correctly detected by the authors' scheme (i.e., a detection rate of about 72%), with the number of false positives at approximately 1.1 per sectional image. The authors' present results show that their scheme can be regarded as a technique for CAD systems to detect nodules in helical CT pulmonary images.
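
As a rough illustration of the GATM step, the sketch below runs a bare-bones genetic search over candidate positions and template indices with normalized cross-correlation as the fitness; the population size, mutation rates, and omission of crossover are simplifications of ours, not the authors' GA design.

```python
import numpy as np

def ncc(patch, tmpl):
    # normalized cross-correlation between equally sized arrays
    p, t = patch - patch.mean(), tmpl - tmpl.mean()
    return (p * t).sum() / (np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12)

def gatm(image, templates, pop=60, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    th, tw = templates[0].shape
    H, W = image.shape[0] - th, image.shape[1] - tw
    # chromosome = (row, col, template index)
    genes = np.column_stack([rng.integers(0, H, pop),
                             rng.integers(0, W, pop),
                             rng.integers(0, len(templates), pop)])
    def fitness(g):
        y, x, k = g
        return ncc(image[y:y + th, x:x + tw], templates[k])
    for _ in range(gens):
        scores = np.array([fitness(g) for g in genes])
        genes = genes[np.argsort(scores)[::-1]][:pop // 2]  # keep best half
        children = genes.copy()
        # mutation: jitter positions, occasionally swap the template
        children[:, 0] = np.clip(children[:, 0] + rng.integers(-5, 6, len(children)), 0, H - 1)
        children[:, 1] = np.clip(children[:, 1] + rng.integers(-5, 6, len(children)), 0, W - 1)
        swap = rng.random(len(children)) < 0.1
        children[swap, 2] = rng.integers(0, len(templates), swap.sum())
        genes = np.vstack([genes, children])
    best = max(genes, key=fitness)
    return tuple(int(v) for v in best), float(fitness(best))
```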

Journal ArticleDOI
TL;DR: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography prevents beam hardening artifacts by incorporating a polychromatic acquisition model; preliminary results indicate that metal artifact reduction is a very promising application.
Abstract: A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm.
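
The polychromatic acquisition model described above can be written as follows (our transcription): the expected reading of detector i sums attenuated contributions over the discrete energies E_k, with each voxel's linear attenuation decomposed into photoelectric and Compton parts,

```latex
E[y_i] = \sum_k b_{ik} \, \exp\!\Big( -\sum_j \ell_{ij} \, \mu_j(E_k) \Big),
\qquad
\mu_j(E) = \phi_j \, \Phi(E) + \theta_j \, \Theta(E)
```

where b_ik is the open-beam intensity at energy E_k, ℓ_ij the intersection length of ray i with voxel j, and Φ, Θ the known energy dependencies of the two components; constraining the relative weight of φ_j and θ_j encodes the prior material assumptions.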

Journal ArticleDOI
TL;DR: The authors consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines; landmarks are localized using a semi-automatic approach based on three-dimensional (3-D) differential operators.
Abstract: The authors consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and makes it possible to take landmark localization errors into account. The extension is important for clinical applications since landmark extraction is always prone to error. The authors' approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks the authors use a semi-automatic approach based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
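
The minimizing functional has the following general shape (notation ours): a landmark misfit term weighted by the inverse localization-error covariances Σ_i plus a bending-energy term of order m,

```latex
J_\lambda(u) = \frac{1}{n} \sum_{i=1}^{n}
    \big( q_i - u(p_i) \big)^{\!\top} \Sigma_i^{-1} \big( q_i - u(p_i) \big)
  + \lambda \int \lVert D^m u \rVert^2 \, dx
```

Isotropic errors make each Σ_i a scalar matrix, anisotropic errors a full covariance; λ → 0 recovers the interpolating thin-plate spline, and the optimal affine transformation lies in the null space of the bending term.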

Journal ArticleDOI
TL;DR: An approximation to the distribution of target registration error (TRE) is derived, extending previous work that gave the expected squared value of TRE; numerical simulations show that the theoretical results are a close match to the simulated ones.
Abstract: Guidance systems designed for neurosurgery, hip surgery, spine surgery and for approaches to other anatomy that is relatively rigid can use rigid-body transformations to accomplish image registration. These systems often rely on point-based registration to determine the transformation and many such systems use attached fiducial markers to establish accurate fiducial points for the registration, the points being established by some fiducial localization process. Accuracy is important to these systems, as is knowledge of the level of that accuracy. An advantage of marker-based systems, particularly those in which the markers are bone-implanted, is that registration error depends only on the fiducial localization and is, thus, to a large extent independent of the particular object being registered. Thus, it should be possible to predict the clinical accuracy of marker-based systems on the basis of experimental measurements made with phantoms or previous patients. For most registration tasks, the most important error measure is target registration error (TRE), which is the distance after registration between corresponding points not used in calculating the registration transform. Here, the authors derive an approximation to the distribution of TRE; this is an extension of previous work that gave the expected squared value of TRE. They show the distribution of the squared magnitude of TRE and that of the component of TRE in an arbitrary direction. Using numerical simulations, the authors show that their theoretical results are a close match to the simulated ones.
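
For context, the expected-squared-value result that this paper extends is usually written as

```latex
\big\langle \mathrm{TRE}^2(r) \big\rangle \approx
\frac{\big\langle \mathrm{FLE}^2 \big\rangle}{N}
\left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right)
```

where N is the number of fiducials, FLE the fiducial localization error, d_k the distance of the target r from the kth principal axis of the fiducial configuration, and f_k the RMS distance of the fiducials from that axis. The present paper characterizes the distribution of TRE around this expectation.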

Journal ArticleDOI
TL;DR: A 3-D computer-aided diagnosis scheme for automated detection of colonic polyps in computed tomography (CT) colonographic data sets was developed, and its performance was evaluated with colonoscopy as the gold standard.
Abstract: We have developed a three-dimensional (3-D) computer-aided diagnosis scheme for automated detection of colonic polyps in computed tomography (CT) colonographic data sets, and assessed its performance based on colonoscopy as the gold standard. In this scheme, a thick region encompassing the entire colonic wall is extracted from an isotropic volume reconstructed from the CT images in CT colonography. Polyp candidates are detected by first computing 3-D geometric features that characterize polyps, folds, and colonic walls at each voxel in the extracted colon, and then segmenting connected components corresponding to suspicious regions by hysteresis thresholding based on these geometric features. We apply fuzzy clustering to these connected components to obtain the polyp candidates. False-positive (FP) detections are then reduced by computation of several 3-D volumetric features characterizing the internal structures of the polyp candidates, followed by the application of discriminant analysis to the feature space generated by these volumetric features. The locations of the polyps detected by our computerized method were compared to the gold standard of conventional colonoscopy. The performance was evaluated based on 43 clinical cases, including 12 polyps determined by colonoscopy. Our computerized scheme was shown to have the potential to detect polyps in CT colonography with a clinically acceptable high sensitivity and a low FP rate.
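
One concrete piece of the pipeline above, sketched with scikit-image: hysteresis thresholding of a per-voxel geometric feature map (e.g., a shape index), keeping weak responses only where they connect to strong ones. The threshold values are placeholders, not the paper's tuned settings.

```python
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label

def candidate_regions(feature_map, low=0.8, high=0.9):
    # keep weak responses only where they connect to strong responses
    mask = apply_hysteresis_threshold(feature_map, low, high)
    return label(mask)   # connected components = polyp candidates
```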

Journal ArticleDOI
TL;DR: Reports on the design and test of an image processing algorithm for the localization of the optic disk in low-resolution (about 20 µm/pixel) color fundus images; a confidence level is associated with the final detection, indicating the "level of difficulty" the detector has in identifying the OD position and shape.
Abstract: Reports on the design and test of an image processing algorithm for the localization of the optic disk (OD) in low-resolution (about 20 µm/pixel) color fundus images. The design relies on the combination of two procedures: 1) a Hausdorff-based template matching technique on edge maps, guided by 2) a pyramidal decomposition for large-scale object tracking. The two approaches are tested against a database of 40 images of various visual quality and retinal pigmentation, as well as of normal and small pupils. An average error of 7% on OD center positioning is reached with no false detection. In addition, a confidence level is associated with the final detection, indicating the "level of difficulty" the detector had in identifying the OD position and shape.
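
A tiny sketch of the Hausdorff-based matching idea used in procedure 1): score a candidate template placement by the partial directed Hausdorff distance from template edge points to image edge points. The 95th-percentile ranking is a standard choice in partial Hausdorff matching and an assumption here; the guiding pyramidal decomposition is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def partial_hausdorff(template_pts, image_pts, frac=0.95):
    # distance from each (already translated) template edge point to its
    # nearest image edge point; a high quantile instead of the maximum
    # makes the measure tolerant to missing or spurious edges
    d, _ = cKDTree(image_pts).query(template_pts)
    return np.quantile(d, frac)
```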

Journal ArticleDOI
TL;DR: A new algorithm for the nonrigid registration of three-dimensional magnetic resonance (MR) intraoperative image sequences showing brain shift; results show a good correlation of the internal brain structures after deformation and a good capability of measuring surface as well as subsurface shift.
Abstract: We present a new algorithm for the nonrigid registration of three-dimensional magnetic resonance (MR) intraoperative image sequences showing brain shift. The algorithm tracks key surfaces of objects (the cortical surface and the lateral ventricles) in the image sequence using a deformable surface matching algorithm. The volumetric deformation field of the objects is then inferred from the displacements at the boundary surfaces using a linear elastic biomechanical finite-element model. Two experiments on synthetic image sequences are presented, as well as an initial experiment on intraoperative MR images showing brain shift. The results of the registration algorithm show a good correlation of the internal brain structures after deformation, and a good capability of measuring surface as well as subsurface shift. We measured distances between landmarks in the deformed initial image and the corresponding landmarks in the target scan. Cortical surface shifts of up to 10 mm and subsurface shifts of up to 6 mm were recovered with an accuracy of 1 mm or less and 3 mm or less, respectively.

Journal ArticleDOI
TL;DR: A novel multistage hybrid appearance model methodology is presented in which a hybrid active shape model/active appearance model (AAM) stage helps avoid local minima of the matching function, yielding an overall more favorable matching result.
Abstract: A fully automated approach to segmentation of the left and right cardiac ventricles from magnetic resonance (MR) images is reported. A novel multistage hybrid appearance model methodology is presented in which a hybrid active shape model/active appearance model (AAM) stage helps avoid local minima of the matching function. This yields an overall more favorable matching result. An automated initialization method is introduced, making the approach fully automated. The authors' method was trained on a set of 102 MR images and tested on a separate set of 60 images. In all testing cases, the matching resulted in a visually plausible and accurate mapping of the model to the image data. Average signed border positioning errors did not exceed 0.3 mm in any of the three determined contours: the left-ventricular (LV) epicardium and the LV and right-ventricular (RV) endocardium. The area measurements derived from the three contours correlated well with the independent standard (r=0.96, 0.96, 0.90), with slopes and intercepts of the regression lines close to one and zero, respectively. Testing the reproducibility of the method demonstrated an unbiased performance with a small range of error as assessed via the Bland-Altman statistic. In direct border positioning error comparison, the multistage method significantly outperformed the conventional AAM (p<0.001). The developed method promises to facilitate fully automated quantitative analysis of LV and RV morphology and function in a clinical setting.

Journal ArticleDOI
TL;DR: An image-based technique to rigidly register intraoperative three-dimensional ultrasound (US) with preoperative magnetic resonance (MR) images by maximization of a similarity measure which generalizes the correlation ratio and whose novelty is to incorporate multivariate information from the MR data.
Abstract: Presents a new image-based technique to rigidly register intraoperative three-dimensional ultrasound (US) with preoperative magnetic resonance (MR) images. Automatic registration is achieved by maximization of a similarity measure which generalizes the correlation ratio, and whose novelty is to incorporate multivariate information from the MR data (intensity and gradient). In addition, the similarity measure is built upon a robust intensity-based distance measure, which makes it possible to handle a variety of US artifacts. A cross-validation study has been carried out using a number of phantom and clinical data sets. This indicates that the method is quite robust and that the worst registration errors are of the order of the MR image resolution.
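
For reference, the scalar correlation ratio that the measure generalizes can be computed in a few lines of numpy; the binning granularity is an arbitrary choice here, and the paper's multivariate, robust extension is not shown.

```python
import numpy as np

def correlation_ratio(x, y, bins=64):
    # eta^2 = 1 - E[Var(Y|X)] / Var(Y), with X discretized into bins
    x, y = x.ravel(), y.ravel().astype(float)
    idx = np.digitize(x, np.histogram_bin_edges(x, bins))
    within = sum(y[idx == b].size * y[idx == b].var() for b in np.unique(idx))
    return 1.0 - within / (y.size * y.var())   # 1 = perfect functional dependence
```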

Journal ArticleDOI
TL;DR: Qualitatively, the boundaries detected by the automated system generally agreed extremely well with the true retinal structure for the vast majority of OCT images, and a robust, quantitatively accurate system can be expected to improve patient care.
Abstract: Presents a system for detecting retinal boundaries in optical coherence tomography (OCT) B-scans. OCT is a relatively new imaging modality giving cross-sectional images that are qualitatively similar to ultrasound. However, the axial resolution with OCT is much higher, on the order of 10 µm. Objective, quantitative measures of retinal thickness may be made from OCT images. Knowledge of retinal thickness is important in the evaluation and treatment of many ocular diseases. The boundary-detection system presented here uses a one-dimensional edge-detection kernel to yield edge primitives. These edge primitives are rated, selected, and organized to form a coherent boundary structure by use of a Markov model of retinal boundaries as detected by OCT. Qualitatively, the boundaries detected by the automated system generally agreed extremely well with the true retinal structure for the vast majority of OCT images. Only one of the 1450 evaluation images caused the algorithm to fail. A quantitative evaluation of the retinal boundaries was performed as well, using the clinical application of automatic retinal thickness determination. Retinal thickness measurements derived from the algorithm's results were compared with thickness measurements from manually corrected boundaries for 1450 test images. The algorithm's thickness measurements over a 1-mm region near the fovea differed from the corrected thickness measurements by less than 10 µm for 74% of the images and by less than 25 µm (10% of normal retinal thickness) for 98.4% of the images. These errors are near the machine's resolution limit and still well below clinical significance. Current, standard clinical practice involves a qualitative, visual assessment of retinal thickness. A robust, quantitatively accurate system such as the authors' can be expected to improve patient care.
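
The edge-primitive step can be sketched as below: convolve each A-scan (image column) with a one-dimensional derivative-of-Gaussian kernel and keep strong extrema as candidate boundary points. The kernel width and the strength test are assumptions of ours; the Markov model that organizes primitives into coherent boundaries is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_primitives(bscan, sigma=2.0, frac=0.3):
    # derivative-of-Gaussian response along depth (axis 0 = A-scan direction)
    resp = gaussian_filter1d(bscan.astype(float), sigma, axis=0, order=1)
    strong = np.abs(resp) > frac * np.abs(resp).max()
    rows, cols = np.nonzero(strong)
    return np.column_stack([rows, cols, resp[rows, cols]])  # (depth, x, strength)
```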

Journal ArticleDOI
TL;DR: A novel model-based correction method is proposed, based on the assumption that an image corrupted by intensity inhomogeneity contains more information than the corresponding uncorrupted image; the method proved to be effective, reliable, and computationally attractive.
Abstract: In this paper, the problem of retrospective correction of intensity inhomogeneity in magnetic resonance (MR) images is addressed. A novel model-based correction method is proposed, based on the assumption that an image corrupted by intensity inhomogeneity contains more information than the corresponding uncorrupted image. The image degradation process is described by a linear model, consisting of a multiplicative and an additive component which are modeled by a combination of smoothly varying basis functions. The degraded image is corrected by the inverse of the image degradation model. The parameters of this model are optimized such that the information of the corrected image is minimized while the global intensity statistic is preserved. The method was quantitatively evaluated and compared to other methods on a number of simulated and real MR images and proved to be effective, reliable, and computationally attractive. The method can be widely applied to different types of MR images because it solely uses the information that is naturally present in an image, without making assumptions on its spatial and intensity distribution. Moreover, the method requires no preprocessing, parameter setting, or user interaction. Consequently, the proposed method may be a valuable tool in MR image analysis.
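
The linear degradation model and its inverse can be summarized as follows (our transcription, with b_i denoting the smoothly varying basis functions):

```latex
v(x) = u(x) \, m(x) + a(x), \qquad
m(x) = 1 + \sum_i c_i \, b_i(x), \qquad
a(x) = \sum_i d_i \, b_i(x), \qquad
\hat{u}(x) = \frac{v(x) - a(x)}{m(x)}
```

The coefficients c_i and d_i are optimized so that the information (entropy) of the corrected image's intensity distribution is minimized while the global intensity statistic is preserved.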

Journal ArticleDOI
TL;DR: A method for the detection of masses in mammographic images that employs Gaussian smoothing and subsampling operations as preprocessing steps; methods for analyzing oriented flow-like textural information in mammograms are proposed.
Abstract: We propose a method for the detection of masses in mammographic images that employs Gaussian smoothing and subsampling operations as preprocessing steps. The mass portions are segmented by establishing intensity links from the central portions of masses into the surrounding areas. We introduce methods for analyzing oriented flow-like textural information in mammograms. Features based on flow orientation in adaptive ribbons of pixels across the margins of masses are proposed to classify the regions detected as true mass regions or false-positives (FPs). The methods yielded a mass versus normal tissue classification accuracy represented as an area (Az) of 0.87 under the receiver operating characteristic (ROC) curve with a dataset of 56 images including 30 benign disease, 13 malignant disease, and 13 normal cases selected from the mini Mammographic Image Analysis Society database. A sensitivity of 81% was achieved at 2.2 FPs/image. Malignant tumor versus normal tissue classification resulted in a higher Az value of 0.9 under the ROC curve using only the 13 malignant and 13 normal cases, with a sensitivity of 85% at 2.45 FPs/image. The mass detection algorithm could detect all 13 malignant tumors successfully, but achieved a success rate of only 63% (19/30) in detecting the benign masses. The mass regions that were successfully segmented were further classified as benign or malignant disease by computing five texture features based on gray-level co-occurrence matrices (GCMs) and using the features in a logistic regression method. The features were computed using adaptive ribbons of pixels across the boundaries of the masses. Benign versus malignant classification using the GCM-based texture features resulted in Az=0.79 with 19 benign and 13 malignant cases.
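
As a short illustration of the GCM-based texture step, the scikit-image sketch below computes co-occurrence features on a rectangular patch; the paper instead computes them over adaptive ribbons of pixels across the mass boundaries, and the distance/angle settings here are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_uint8):
    glcm = graycomatrix(patch_uint8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # average each property over the four directions
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```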

Journal ArticleDOI
TL;DR: A biomechanical model of the breast is presented using a finite element (FE) formulation, and emphasis is given to the modeling of breast tissue deformation that takes place in breast imaging procedures.
Abstract: Breast tissue deformation modeling has recently gained considerable interest in various medical applications. A biomechanical model of the breast is presented using a finite element (FE) formulation. Emphasis is given to the modeling of breast tissue deformation which takes place in breast imaging procedures. The first step in implementing the FE modeling (FEM) procedure is mesh generation. For objects with irregular and complex geometries such as the breast, this step is one of the most difficult and tedious tasks. For FE mesh generation, two automated methods are presented which process MRI breast images to create a patient-specific mesh. The main components of the breast are adipose, fibroglandular and skin tissues. For modeling the adipose and fibroglandular tissues, we used eight noded hexahedral elements with hyperelastic properties, while for the skin, we chose four noded hyperelastic membrane elements. For model validation, an MR image of an agarose phantom was acquired and corresponding FE meshes were created. Based on assigned elasticity parameters, a numerical experiment was performed using the FE meshes, and good results were obtained. The model was also applied to a breast image registration problem of a volunteer's breast. Although qualitatively reasonable, further work is required to validate the results quantitatively.

Journal ArticleDOI
TL;DR: The authors argue that their intensity modeling may be more appropriate than mutual information (MI) in the context of evaluating high-dimensional deformations, as it puts more constraints on the parameters to be estimated and, thus, permits a better search of the parameter space.
Abstract: This paper presents an original method for three-dimensional elastic registration of multimodal images. The authors propose to make use of a scheme that iterates between correcting for intensity differences between images and performing standard monomodal registration. The core of the authors' contribution resides in providing a method that finds the transformation that maps the intensities of one image to those of another. It makes the assumption that there are at most two functional dependencies between the intensities of structures present in the images to register, and relies on robust estimation techniques to evaluate these functions. The authors provide results showing successful registration between several imaging modalities involving segmentations, T1 magnetic resonance (MR), T2 MR, proton density (PD) MR and computed tomography (CT). The authors also argue that their intensity modeling may be more appropriate than mutual information (MI) in the context of evaluating high-dimensional deformations, as it puts more constraints on the parameters to be estimated and, thus, permits a better search of the parameter space.


Journal ArticleDOI
TL;DR: A novel algorithm is presented that analyzes and constrains the topology of a volumetric object and localizes the change to a volume to the specific areas of its topological defects.
Abstract: The human cerebral cortex is topologically equivalent to a sheet and can be considered topologically spherical if it is closed at the brainstem. Low-level segmentation of magnetic resonance (MR) imagery typically produces cerebral volumes whose tessellations are not topologically spherical. The authors present a novel algorithm that analyzes and constrains the topology of a volumetric object. Graphs are formed that represent the connectivity of voxel segments in the foreground and background of the image. These graphs are analyzed and minimal corrections to the volume are made prior to tessellation. The authors apply the algorithm to a simple test object and to cerebral white matter masks generated by a low-level tissue identification sequence. The authors tessellate the resulting objects using the marching cubes algorithm and verify their topology by computing their Euler characteristics. A key benefit of the algorithm is that it localizes the change to a volume to the specific areas of its topological defects.
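
Verifying the result as the authors do reduces to a few lines: for a closed triangle mesh, the Euler characteristic V - E + F equals 2 exactly when the surface is topologically spherical (genus 0). The (vertices, faces) representation below is a generic assumption.

```python
def euler_characteristic(vertices, faces):
    # V - E + F; each triangle contributes 3 edges, shared edges counted once
    V, F = len(vertices), len(faces)
    edges = {tuple(sorted(e)) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}
    return V - len(edges) + F      # = 2 - 2 * genus for a closed surface

def is_spherical(vertices, faces):
    return euler_characteristic(vertices, faces) == 2
```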

Journal ArticleDOI
TL;DR: A three-dimensional model-based method is presented to estimate skeletal motion of the knee from high-speed sequences of biplane radiographs, which implicitly assumes that geometrical features cannot be detected reliably and an exact segmentation of bone edges is not always feasible.
Abstract: Current noninvasive or minimally invasive methods for evaluating in vivo knee kinematics are inadequate for accurate determination of dynamic joint function due to limited accuracy and/or insufficient sampling rates. A three-dimensional (3-D) model-based method is presented to estimate skeletal motion of the knee from high-speed sequences of biplane radiographs. The method implicitly assumes that geometrical features cannot be detected reliably and that an exact segmentation of bone edges is not always feasible. An existing biplane radiograph system was simulated as two separate single-plane radiograph systems. Position and orientation of the underlying bone were determined for each single-plane view by generating projections through a 3-D volumetric model (from computed tomography) and producing an image (digitally reconstructed radiograph) similar (based on texture information and rough edges of bone) to the two-dimensional radiographs. The absolute 3-D pose was determined using the known imaging geometry of the biplane radiograph system and a 3-D line intersection method. Results were compared to data of known accuracy, obtained from a previously established bone-implanted marker method. Differences in controlled in vitro tests were on the order of 0.5 mm for translation and 1.4° for rotation. A biplane radiograph sequence of a canine hindlimb during treadmill walking was used for in vivo testing, with differences on the order of 0.8 mm for translation and 2.5° for rotation.

Journal ArticleDOI
TL;DR: The method described here corrects for geometrical distortion related to B0 inhomogeneity, gradient eddy currents, radio-frequency pulse frequency offset, and chemical shift effect.
Abstract: A computationally efficient technique is described for the simultaneous removal of ghosting and geometrical distortion artifacts in echo-planar imaging (EPI) utilizing a multiecho, gradient-echo reference scan. Nyquist ghosts occur in EPI reconstructions because odd and even lines of k-space are acquired with opposite polarity, and experimental imperfections such as gradient eddy currents, imperfect pulse sequence timing, B0 field inhomogeneity, susceptibility, and chemical shift result in the even and odd lines of k-space being offset by different amounts relative to the true center of the acquisition window. Geometrical distortion occurs due to the limited bandwidth of the EPI images in the phase-encode direction. This distortion can be problematic when attempting to overlay an activation map from a functional magnetic resonance imaging experiment generated from EPI data on a high-resolution anatomical image. The method described here corrects for geometrical distortion related to B0 inhomogeneity, gradient eddy currents, radio-frequency pulse frequency offset, and chemical shift effect. The algorithm for removing ghost artifacts utilizes phase information in two dimensions and is, thus, more robust than conventional one-dimensional methods. An additional reference scan is required which takes approximately 2 min for a matrix size of 64×64 and a repetition time of 2 s. Results from a water phantom and a human brain at 3 T demonstrate the effectiveness of the method for removing ghosts and geometric distortion artifacts.

Journal ArticleDOI
TL;DR: Using a new digital hand atlas, an image analysis methodology is being developed to assist radiologists in bone age estimation and to describe the stage of skeletal development more objectively than visual comparison.
Abstract: Clinical assessment of skeletal maturity is based on a visual comparison of a left-hand wrist radiograph with atlas patterns. Using a new digital hand atlas, an image analysis methodology is being developed to assist radiologists in bone age estimation. The analysis starts with a preprocessing function yielding epiphyseal/metaphyseal regions of interest (EMROIs). Then, these regions are subjected to a feature extraction function. Accuracy has been measured independently at three stages of the image analysis: detection of the phalangeal tip, extraction of the EMROIs, and location of the diameters and lower edge of the EMROIs. The extracted features describe the stage of skeletal development more objectively than visual comparison.

Journal ArticleDOI
TL;DR: A new method for computer-aided detection of polyps in computed tomography (CT) colonography (virtual colonoscopy), a technique in which polyps are imaged along the wall of the air-inflated, cleansed colon with X-ray CT; the method combines the information from many random images to generate reliable signatures of shape.
Abstract: Adenomatous polyps in the colon are believed to be the precursor to colorectal carcinoma, the second leading cause of cancer deaths in the United States. In this paper, we propose a new method for computer-aided detection of polyps in computed tomography (CT) colonography (virtual colonoscopy), a technique in which polyps are imaged along the wall of the air-inflated, cleansed colon with X-ray CT. Initial work with computer-aided detection has shown high sensitivity, but at a cost of too many false positives. We present a statistical approach that uses support vector machines to distinguish the differentiating characteristics of polyps and healthy tissue, and uses this information for the classification of new cases. One of the main contributions of the paper is the new three-dimensional pattern processing approach, called the random orthogonal shape sections method, which combines the information from many random images to generate reliable signatures of shape. The input to the proposed system is a collection of volume data from candidate polyps obtained by a high-sensitivity, low-specificity system that we developed previously. The results of our tenfold cross-validation experiments show that, on average, the system increases the specificity from 0.19 (0.35) to 0.69 (0.74) at a sensitivity level of 1.0 (0.95).
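
A compact sketch of the classification stage: a support vector machine on shape-signature feature vectors, scored by tenfold cross-validation as in the paper. Feature extraction (the random orthogonal shape sections) is assumed to have produced X already, and the RBF kernel, C value, and feature scaling are illustrative defaults of ours.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(X, y):
    # X: (n_candidates, n_features) shape signatures; y: 1 = polyp, 0 = healthy
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
```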