
Showing papers in "IEEE Transactions on Medical Imaging in 1998"


Journal Article•DOI•
TL;DR: A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present, and is applied at an early stage in an automated data analysis, before a tissue model is available.
Abstract: A novel approach to correcting for intensity nonuniformity in magnetic resonance (MR) data is described that achieves high performance without requiring a model of the tissue classes present. The method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. Described as nonparametric nonuniform intensity normalization (N3), the method is independent of pulse sequence and insensitive to pathological data that might otherwise violate model assumptions. To eliminate the dependence of the field estimate on anatomy, an iterative approach is employed to estimate both the multiplicative bias field and the distribution of the true tissue intensities. The performance of this method is evaluated using both real and simulated MR data.
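
The core loop alternates between estimating the true tissue intensity distribution and a smooth multiplicative field. A minimal Python sketch of that iterative structure follows; it replaces N3's histogram-sharpening (deconvolution) step with a crude mean-removal surrogate, so the parameters and the update rule here are illustrative assumptions, not the published algorithm.

```python
# Minimal sketch of multiplicative bias-field estimation in the log domain.
# This is NOT the full N3 algorithm: N3 sharpens the intensity histogram by
# deconvolution on every pass, whereas this sketch substitutes a crude
# mean-removal step. It only illustrates the alternating iterative structure.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_bias_field(image, mask, iters=20, sigma_vox=8.0):
    """Return a smooth multiplicative field b; image / b is the corrected image.

    image: ND array of intensities; mask: boolean array of valid voxels."""
    log_im = np.log(np.maximum(image.astype(float), 1e-6))
    log_field = np.zeros_like(log_im)
    m = mask.astype(float)
    den = np.maximum(gaussian_filter(m, sigma_vox), 1e-6)
    for _ in range(iters):
        residual = log_im - log_field
        residual = residual - residual[mask].mean()  # stand-in for N3's
                                                     # histogram sharpening
        log_field += gaussian_filter(residual * m, sigma_vox) / den
    return np.exp(log_field)
```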

4,613 citations


Journal Article•DOI•
TL;DR: The authors present a realistic, high-resolution, digital, volumetric phantom of the human brain, which can be used to simulate tomographic images of the head and is the ideal tool to test intermodality registration algorithms.
Abstract: After conception and implementation of any new medical image processing algorithm, validation is an important step to ensure that the procedure fulfils all requirements set forth at the initial design stage. Although the algorithm must be evaluated on real data, a comprehensive validation requires the additional use of simulated data since it is impossible to establish ground truth with in vivo data. Experiments with simulated data permit controlled evaluation over a wide range of conditions (e.g., different levels of noise, contrast, intensity artefacts, or geometric distortion). Such considerations have become increasingly important with the rapid growth of neuroimaging, i.e., computational analysis of brain structure and function using brain scanning methods such as positron emission tomography and magnetic resonance imaging. Since simple objects such as ellipsoids or parallelepipeds do not reflect the complexity of natural brain anatomy, the authors present the design and creation of a realistic, high-resolution, digital, volumetric phantom of the human brain. This three-dimensional digital brain phantom is made up of ten volumetric data sets that define the spatial distribution for different tissues (e.g., grey matter, white matter, muscle, skin, etc.), where voxel intensity is proportional to the fraction of tissue within the voxel. The digital brain phantom can be used to simulate tomographic images of the head. Since the contribution of each tissue type to each voxel in the brain phantom is known, it can be used as the gold standard to test analysis algorithms such as classification procedures which seek to identify the tissue "type" of each image voxel. Furthermore, since the same anatomical phantom may be used to drive simulators for different modalities, it is the ideal tool to test intermodality registration algorithms. The brain phantom and simulated MR images have been made publicly available on the Internet (http://www.bic.mni.mcgill.ca/brainweb).

1,811 citations


Journal Article•DOI•
TL;DR: Two new expressions for estimating the registration accuracy of point-based guidance systems are presented, along with the surprising conclusion that expected registration accuracy (TRE) is worst near the fiducials that are most closely aligned.
Abstract: Guidance systems designed for neurosurgery, hip surgery, and spine surgery, and for approaches to other anatomy that is relatively rigid can use rigid-body transformations to accomplish image registration. These systems often rely on point-based registration to determine the transformation, and many such systems use attached fiducial markers to establish accurate fiducial points for the registration, the points being established by some fiducial localization process. Accuracy is important to these systems, as is knowledge of the level of that accuracy. An advantage of marker-based systems, particularly those in which the markers are bone-implanted, is that registration error depends only on the fiducial localization error (FLE) and is thus to a large extent independent of the particular object being registered. Thus, it should be possible to predict the clinical accuracy of marker-based systems on the basis of experimental measurements made with phantoms or previous patients. This paper presents two new expressions for estimating registration accuracy of such systems and points out a danger in using a traditional measure of registration accuracy. The new expressions represent fundamental theoretical results with regard to the relationship between localization error and registration error in rigid-body, point-based registration. Rigid-body, point-based registration is achieved by finding the rigid transformation that minimizes "fiducial registration error" (FRE), which is the root mean square distance between homologous fiducials after registration. Closed form solutions have been known since 1966. The expected value ⟨FRE²⟩ depends on the number N of fiducials and the expected squared value of FLE, ⟨FLE²⟩, but in 1979 it was shown that ⟨FRE²⟩ is approximately independent of the fiducial configuration C. The importance of this surprising result seems not yet to have been appreciated by the registration community: Poor registrations caused by poor fiducial configurations may appear to be good due to a small FRE value. A more critical and direct measure of registration error is the "target registration error" (TRE), which is the distance between homologous points other than the centroids of fiducials. Efforts to characterize its behavior have been made since 1989. Published numerical simulations have shown that ⟨TRE²⟩ is roughly proportional to ⟨FLE²⟩/N and, unlike ⟨FRE²⟩, does depend in some way on C. Thus, FRE, which is often used as feedback to the surgeon using a point-based guidance system, is in fact an unreliable indicator of registration accuracy. In this work the authors derive approximate expressions for ⟨TRE²⟩, and for the expected squared alignment error of an individual fiducial. They validate both approximations through numerical simulations. The former expression can be used to provide reliable feedback to the surgeon during surgery and to guide the placement of markers before surgery, or at least to warn the surgeon of potentially dangerous fiducial placements; the latter expression leads to a surprising conclusion: Expected registration accuracy (TRE) is worst near the fiducials that are most closely aligned! This revelation should be of particular concern to surgeons who may at present be relying on fiducial alignment as an indicator of the accuracy of their point-based guidance systems.
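
For reference, the classical FRE relation and the paper's new TRE approximation are commonly quoted in the following form, where ⟨·⟩ denotes expectation, d_k is the distance of the target point from principal axis k of the fiducial configuration, and f_k is the RMS distance of the fiducials from that axis (the notation follows the common restatement of these results, not necessarily the paper's exact symbols):

```latex
\langle \mathrm{FRE}^2 \rangle \;\approx\; \Bigl(1 - \tfrac{2}{N}\Bigr)\,\langle \mathrm{FLE}^2 \rangle,
\qquad
\langle \mathrm{TRE}^2(\mathbf{r}) \rangle \;\approx\;
\frac{\langle \mathrm{FLE}^2 \rangle}{N}
\Bigl(1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2}\Bigr)
```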

1,055 citations


Journal Article•DOI•
TL;DR: Results show that the introduction of soft-tissue structures and interventional instruments into the phantom image can have a large effect on the performance of some similarity measures previously applied to 2-D-3-D image registration.
Abstract: A comparison of six similarity measures for use in intensity-based two-dimensional-three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a "gold-standard" registration which has been accurately calculated using fiducial markers. The similarity measures are used to register a computed tomography (CT) scan of a spine phantom to a fluoroscopy image of the phantom. The registration is carried out within a region-of-interest in the fluoroscopy image which is user-defined to contain a single vertebra. Many of the problems involved in this type of registration are caused by features which were not modeled by a phantom image alone. More realistic "gold-standard" data sets were simulated using the phantom image with clinical image features overlaid. Results show that the introduction of soft-tissue structures and interventional instruments into the phantom image can have a large effect on the performance of some similarity measures previously applied to 2-D-3-D image registration. Two measures were able to register accurately and robustly even when soft-tissue structures and interventional instruments were present as differences between the images. These measures were pattern intensity and gradient difference. Their registration accuracy, for all the rigid-body parameters except for the source-to-film translation, was within a root-mean-square (rms) error of 0.53 mm or degrees of the "gold-standard" values. No failures occurred while registering using these measures.
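
As an illustration of one of the two robust measures, here is a hedged sketch of a gradient-difference style similarity between the fluoroscopy image and a digitally reconstructed radiograph (DRR) rendered from the CT at a candidate pose. The scale factor s and the variance-based constants follow one common formulation; treat the details as assumptions rather than the paper's exact recipe.

```python
# Hedged sketch of a gradient-difference style 2-D-3-D similarity measure.
import numpy as np

def gradient_difference(fluoro, drr, s=1.0):
    gv_f, gh_f = np.gradient(fluoro.astype(float))   # vertical, horizontal
    gv_d, gh_d = np.gradient(drr.astype(float))
    diff_v = gv_f - s * gv_d                         # gradient difference images
    diff_h = gh_f - s * gh_d
    a_v, a_h = gv_f.var(), gh_f.var()                # constants from fixed image
    return (a_v / (a_v + diff_v**2)).sum() + (a_h / (a_h + diff_h**2)).sum()
```

In use, this score would be maximized over the six rigid-body pose parameters, re-rendering the DRR at each candidate pose.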

912 citations


Journal Article•DOI•
TL;DR: The authors propose an approach to the construction of the regularization matrix that conforms to the prior assumptions on the impedance distribution, based on the construction of an approximating subspace for the expected impedance distributions.
Abstract: The solution of impedance distribution in electrical impedance tomography is a nonlinear inverse problem that requires the use of a regularization method. The generalized Tikhonov regularization methods have been popular in the solution of many inverse problems. The regularization matrices that are usually used with the Tikhonov method are more or less ad hoc and the implicit prior assumptions are, thus, in many cases inappropriate. In this paper, the authors propose an approach to the construction of the regularization matrix that conforms to the prior assumptions on the impedance distribution. The approach is based on the construction of an approximating subspace for the expected impedance distributions. It is shown by simulations that the reconstructions obtained with the proposed method are better than with two other schemes of the same type when the prior is compatible with the true object. On the other hand, when the prior is incompatible with the true object, the method will still give reasonable estimates.
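
A minimal sketch of the subspace idea under a linearized forward model: components of the impedance image inside a subspace spanned by plausible distributions go unpenalized, while the orthogonal complement is damped. The Jacobian J, the basis matrix, and alpha are assumed inputs, not the paper's exact construction.

```python
# Sketch of generalized Tikhonov regularization with a subspace prior.
import numpy as np

def subspace_tikhonov(J, b, basis, alpha=1e-2):
    """Solve min_x ||J x - b||^2 + alpha ||(I - V V^T) x||^2,
    where the columns of V orthonormally span the prior subspace."""
    V, _ = np.linalg.qr(basis)            # orthonormalize prior examples
    n = J.shape[1]
    L = np.eye(n) - V @ V.T               # projector onto the complement
    A = J.T @ J + alpha * L               # L is a projector, so L^T L = L
    return np.linalg.solve(A, J.T @ b)
```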

530 citations


Journal Article•DOI•
B.F. Jones•
TL;DR: While the use of infrared imaging is increasing in many industrial and security applications, it has declined in medicine, probably because of the continued reliance on first-generation cameras.
Abstract: Infrared thermal imaging has been used for several decades to monitor the temperature distribution of human skin. Abnormalities such as malignancies, inflammation, and infection cause localized increases in temperature which show as hot spots or as asymmetrical patterns in an infrared thermogram. Even though it is nonspecific, infrared thermology is a powerful detector of problems that affect a patient's physiology. While the use of infrared imaging is increasing in many industrial and security applications, it has declined in medicine, probably because of the continued reliance on first-generation cameras. The transfer of military technology for medical use has prompted this reappraisal of infrared thermology in medicine. Digital infrared cameras have much improved spatial and thermal resolutions, and libraries of image processing routines are available to analyze images captured both statically and dynamically. If thermographs are captured under controlled conditions, they may be interpreted readily to diagnose certain conditions and to monitor the reaction of a patient's physiology to thermal and other stresses. Some of the major areas where infrared thermography is being used successfully are neurology, vascular disorders, rheumatic diseases, tissue viability, oncology (especially breast cancer), dermatological disorders, neonatology, ophthalmology, and surgery.

512 citations


Journal Article•DOI•
TL;DR: A system that automatically segments and labels glioblastoma multiforme tumors in magnetic resonance images (MRIs) of the human brain is presented; its results generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.
Abstract: A system that automatically segments and labels glioblastoma multiforme tumors in magnetic resonance images (MRIs) of the human brain is presented. The MRIs consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with the cluster centers for each class, is provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised K-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per slice basis and, more importantly, in tracking total tumor volume during treatment over time.

507 citations


Journal Article•DOI•
TL;DR: A robust fully automatic method for segmenting the brain from head magnetic resonance (MR) images has been developed, which works even in the presence of radio frequency (RF) inhomogeneities.
Abstract: A robust fully automatic method for segmenting the brain from head magnetic resonance (MR) images has been developed, which works even in the presence of radio frequency (RF) inhomogeneities. It has been successful in segmenting the brain in every slice from head images acquired from several different MRI scanners, using different-resolution images and different echo sequences. The method uses an integrated approach which combines image processing techniques based on anisotropic filters and "snakes" contouring with a priori knowledge, used to remove the eyes, which are difficult to exclude on the basis of image intensity alone. It is a multistage process, involving first removal of the background noise leaving a head mask, then finding a rough outline of the brain, then refinement of the rough brain outline to a final mask. The paper describes the main features of the method, and gives results for some brain studies.
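
The anisotropic filtering stage is in the spirit of Perona-Malik diffusion; a minimal version is sketched below with illustrative parameter values (the paper's actual filter and settings may differ).

```python
# Minimal Perona-Malik style anisotropic diffusion: smooths within regions
# while preserving edges.
import numpy as np

def anisotropic_diffusion(im, niter=15, kappa=30.0, lam=0.2):
    u = im.astype(float).copy()
    for _ in range(niter):
        # differences toward the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: small across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```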

427 citations


Journal Article•DOI•
TL;DR: In contrast to previously proposed methods, ML estimation is demonstrated to be unbiased for high signal-to-noise ratio (SNR) and to yield physically relevant results for low SNR.
Abstract: The problem of parameter estimation from Rician distributed data (e.g., magnitude magnetic resonance images) is addressed. The properties of conventional estimation methods are discussed and compared to maximum-likelihood (ML) estimation, which is known to yield optimal results asymptotically. In contrast to previously proposed methods, ML estimation is demonstrated to be unbiased for high signal-to-noise ratio (SNR) and to yield physically relevant results for low SNR.
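
A sketch of ML estimation for the Rician model: minimize the negative log-likelihood of the magnitude samples, using the exponentially scaled Bessel function i0e for numerical stability, since log I0(x) = x + log(i0e(x)). Starting values and the optimizer choice are assumptions.

```python
# Rician negative log-likelihood and an ML fit for (A, sigma).
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def rician_nll(params, m):
    a, sigma = params
    if a < 0 or sigma <= 0:
        return np.inf
    s2 = sigma ** 2
    x = a * m / s2
    log_pdf = np.log(m / s2) - (m ** 2 + a ** 2) / (2 * s2) + x + np.log(i0e(x))
    return -log_pdf.sum()

# usage, with m a 1-D array of magnitude samples from a uniform region:
# res = minimize(rician_nll, x0=[m.mean(), m.std()], args=(m,),
#                method='Nelder-Mead')
# a_ml, sigma_ml = res.x
```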

418 citations


Journal Article•DOI•
TL;DR: The proposed method overcomes the problems of initialization and vessel profile modeling that are encountered in the literature and automatically tracks fundus vessels using linguistic descriptions like "vessel" and "nonvessel."
Abstract: In this paper, the authors present a new unsupervised fuzzy algorithm for vessel tracking that is applied to the detection of the ocular fundus vessels. The proposed method overcomes the problems of initialization and vessel profile modeling that are encountered in the literature and automatically tracks fundus vessels using linguistic descriptions like "vessel" and "nonvessel." The main tool for determining vessel and nonvessel regions along a vessel profile is the fuzzy C-means clustering algorithm, which is fed with properly preprocessed data. Additional procedures for checking the validity of the detected vessels and handling junctions and forks are also presented. The application of the proposed algorithm to fundus images and simulated vessels resulted in very good overall performance and consistent estimation of vessel parameters.
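
The clustering engine is standard fuzzy C-means; a compact version is sketched below. For a vessel profile, X would be the preprocessed 1-D intensity sequence reshaped to a column (X = profile.reshape(-1, 1)) with c=2 for the "vessel"/"nonvessel" labels; the rest of the paper's tracking machinery is not reproduced.

```python
# Compact fuzzy C-means with the standard membership and center updates.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # (n, c) memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)                 # standard FCM update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```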

399 citations


Journal Article•DOI•
TL;DR: The authors show that the algorithm presented is capable of not only reducing speckle, but also enhancing features of diagnostic importance, such as myocardial walls in two-dimensional echocardiograms obtained from the parasternal short-axis view.
Abstract: This paper presents an algorithm for speckle reduction and contrast enhancement of echocardiographic images. Within a framework of multiscale wavelet analysis, the authors apply wavelet shrinkage techniques to eliminate noise while preserving the sharpness of salient features. In addition, nonlinear processing of feature energy is carried out to enhance contrast within local structures and along object boundaries. The authors show that the algorithm is capable of not only reducing speckle, but also enhancing features of diagnostic importance, such as myocardial walls in two-dimensional echocardiograms obtained from the parasternal short-axis view. Shrinkage of wavelet coefficients via soft thresholding within finer levels of scale is carried out on coefficients of logarithmically transformed echocardiograms. Enhancement of echocardiographic features is accomplished via nonlinear stretching followed by hard thresholding of wavelet coefficients within selected (midrange) spatial-frequency levels of analysis. The authors formulate the denoising and enhancement problem, introduce a class of dyadic wavelets, and describe their implementation of a dyadic wavelet transform. Their approach for speckle reduction and contrast enhancement was shown to be less affected by pseudo-Gibbs phenomena. The authors show experimentally that this technique produced superior results both qualitatively and quantitatively when compared to results obtained from existing denoising methods alone. A study using a database of clinical echocardiographic images suggests that such denoising and enhancement may improve the overall consistency of expert observers to manually defined borders.
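
A hedged sketch of the denoising half of the pipeline: soft-threshold the detail coefficients of the log-transformed image. PyWavelets' separable DWT is used for brevity where the paper uses a dyadic (undecimated) transform; the wavelet, decomposition level, and threshold rule are assumptions.

```python
# Soft-threshold wavelet shrinkage of a log-transformed speckled image.
import numpy as np
import pywt

def despeckle(im, wavelet='db4', level=3, k=3.0):
    log_im = np.log1p(im.astype(float))            # speckle -> ~additive noise
    coeffs = pywt.wavedec2(log_im, wavelet, level=level)
    out = [coeffs[0]]                              # keep coarse approximation
    for detail in coeffs[1:]:                      # (cH, cV, cD) per level
        sigma = np.median(np.abs(detail[-1])) / 0.6745   # robust noise estimate
        out.append(tuple(pywt.threshold(c, k * sigma, mode='soft')
                         for c in detail))
    rec = pywt.waverec2(out, wavelet)[:im.shape[0], :im.shape[1]]
    return np.expm1(rec)
```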

Journal Article•DOI•
TL;DR: This paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate, for an idealized two-dimensional (2-D) positron emission tomography (PET) detector.
Abstract: Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al. (1997), this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. The authors compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
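
The list-mode EM update has the familiar MLEM form, except that the sum runs over recorded events rather than sinogram bins, and the sensitivity image sums the detection probability of every possible event. A dense toy sketch follows; storing event rows densely is for illustration only, as in practice they are computed on the fly.

```python
# Toy sketch of a list-mode MLEM iteration.
import numpy as np

def listmode_em(event_rows, sensitivity, iters=20):
    """event_rows: (n_events, n_vox) with p(event i | emission in voxel j);
    sensitivity: (n_vox,) total detection probability per voxel."""
    lam = np.ones(event_rows.shape[1])
    for _ in range(iters):
        fwd = event_rows @ lam                       # expected rate per event
        back = event_rows.T @ (1.0 / np.maximum(fwd, 1e-12))
        lam *= back / np.maximum(sensitivity, 1e-12)
    return lam
```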

Journal Article•DOI•
TL;DR: A preoperative planning system for oral implant surgery was developed which takes as input computed tomographies of the jaws, and a technique is developed for scanning and visualizing any existing removable prosthesis together with the bone structures.
Abstract: A preoperative planning system for oral implant surgery was developed which takes as input computed tomographies (CT's) of the jaws. Two-dimensional (2-D) reslices of these axial CT slices orthogonal to a curve following the jaw arch are computed and shown together with three-dimensional (3-D) surface rendered models of the bone and computer-aided design (CAD)-like implant models. A technique is developed for scanning and visualizing any existing removable prosthesis together with the bone structures. Evaluation of the planning done with the system shows a difference between 2-D and 3-D planning methods. Validation studies measure the benefits of the 3-D approach by comparing plans made in 2-D mode only with those further adjusted using the full 3-D visualization capabilities of the system. The benefits of a 3-D approach are then evident where a prosthesis is involved in the planning. For the majority of the patients, clinically important adjustments and optimizations to the 2-D plans are made once the 3-D visualization is enabled, effectively resulting in a better plan. The alterations are related to bone quality and quantity (p<0.05), biomechanics (p<0.005), and esthetics (p<0.005), and are so obvious that the 3-D plan stands out clearly (p<0.005). The improvements often avoid complications such as mandibular nerve damage, sinus perforations, fenestrations, or dehiscences.

Journal Article•DOI•
TL;DR: A technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects to enable segmentation and tracking of cardiac structures in ultrasound image sequences is presented.
Abstract: This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. Best contour shape and position are then calculated, assuming that at this configuration a global energy function, associated with a contour, attains its minimum. Active contours can be used for tracking by selecting a solution from a previous frame as an initial position in a present frame. Such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates the information on pixel velocities (optical flow) into the estimate of initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked boundaries of mitral valve leaflets, aortic root and endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians. The automated method resulted in contours that were within the boundaries of intraobserver variability.
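
A sketch of the initialization step: push the previous frame's snake points along the optical flow before running the active-contour optimization. OpenCV's pyramidal Lucas-Kanade stands in for the paper's flow estimator; treat that substitution and all parameters as assumptions.

```python
# Propagate snake points with sparse optical flow to initialize the next frame.
import numpy as np
import cv2

def propagate_contour(prev_frame, next_frame, contour_pts):
    """prev_frame, next_frame: 8-bit grayscale images;
    contour_pts: (N, 2) float array of (x, y) snake points."""
    p0 = contour_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, p0, None)
    moved = p1.reshape(-1, 2)
    ok = status.ravel() == 1
    moved[~ok] = contour_pts[~ok]      # keep old position where tracking failed
    return moved                       # initial contour for the next frame
```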

Journal Article•DOI•
TL;DR: Two novel lesion segmentation techniques are developed, one based on a single feature called the radial gradient index (RGI) and one based on simple probabilistic models, to segment mass lesions, or other similar nodular structures, from surrounding background.
Abstract: Segmenting lesions is a vital step in many computerized mass-detection schemes for digital (or digitized) mammograms. The authors have developed two novel lesion segmentation techniques, one based on a single feature called the radial gradient index (RGI) and one based on simple probabilistic models, to segment mass lesions, or other similar nodular structures, from surrounding background. In both methods a series of image partitions is created using gray-level information as well as prior knowledge of the shape of typical mass lesions. With the former method the partition that maximizes the RGI is selected. In the latter method, probability distributions for gray-levels inside and outside the partitions are estimated, and subsequently used to determine the probability that the image occurred for each given partition. The partition that maximizes this probability is selected as the final lesion partition (contour). The authors tested these methods against a conventional region growing algorithm using a database of biopsy-proven, malignant lesions and found that the new lesion segmentation algorithms more closely match radiologists' outlines of these lesions. At an overlap threshold of 0.30, gray-level region growing correctly delineates 62% of the lesions in the authors' database while the RGI and probabilistic segmentation algorithms correctly segment 92% and 96% of the lesions, respectively.
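
One plausible reading of the RGI criterion, sketched below: for a candidate partition, score how consistently the image gradient along the margin points radially away from the seed, and keep the partition that maximizes the score. The margin sampling and normalization here are assumptions, not the paper's exact definition.

```python
# Hedged sketch of a radial gradient index for one candidate partition.
import numpy as np

def radial_gradient_index(image, margin_pts, seed):
    """margin_pts: (N, 2) integer (row, col) margin pixels; seed: (row, col)."""
    gy, gx = np.gradient(image.astype(float))
    r = margin_pts - np.asarray(seed, dtype=float)
    rhat = r / np.maximum(np.linalg.norm(r, axis=1, keepdims=True), 1e-9)
    g = np.stack([gy[margin_pts[:, 0], margin_pts[:, 1]],
                  gx[margin_pts[:, 0], margin_pts[:, 1]]], axis=1)
    return (g * rhat).sum() / max(np.linalg.norm(g, axis=1).sum(), 1e-9)
```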

Journal Article•DOI•
TL;DR: Preliminary experiments indicate that further studies are needed to investigate the potential of wavelet-based subband image decomposition as a tool for detecting microcalcifications in digital mammograms.
Abstract: This paper presents an approach for detecting microcalcifications in digital mammograms employing wavelet-based subband image decomposition. The microcalcifications appear in small clusters of a few pixels with relatively high intensity compared with their neighboring pixels. These image features can be preserved by a detection system that employs a suitable image transform which can localize the signal characteristics in the original and the transform domain. Given that the microcalcifications correspond to high-frequency components of the image spectrum, detection of microcalcifications is achieved by decomposing the mammograms into different frequency subbands, suppressing the low-frequency subband, and, finally, reconstructing the mammogram from the subbands containing only high frequencies. Preliminary experiments indicate that further studies are needed to investigate the potential of wavelet-based subband image decomposition as a tool for detecting microcalcifications in digital mammograms.
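
The detection scheme reduces to: decompose, suppress the low-frequency subband, reconstruct. A minimal PyWavelets version follows; the wavelet and level are assumptions, not the paper's filter bank.

```python
# Reconstruct a mammogram from high-frequency subbands only.
import numpy as np
import pywt

def highpass_reconstruction(mammogram, wavelet='db4', level=2):
    coeffs = pywt.wavedec2(mammogram.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])       # zero the approximation subband
    rec = pywt.waverec2(coeffs, wavelet)
    rec = rec[:mammogram.shape[0], :mammogram.shape[1]]
    return rec     # bright residual clusters are microcalcification candidates
```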

Journal Article•DOI•
TL;DR: All necessary image analysis steps for map generation are described and a prototype software system for fully automatic map generation has been implemented and tested on a representative dataset selected from a clinical study with 50 patients.
Abstract: The new therapeutic method of scotoma-based photocoagulation (SBP) developed at the Vienna Eye Clinic for diagnosis and treatment of age-related macular degeneration requires retinal maps from scanning laser ophthalmoscope images. This paper describes in detail all necessary image analysis steps for map generation. A prototype software system for fully automatic map generation has been implemented and tested on a representative dataset selected from a clinical study with 50 patients. The map required for the SBP treatment can be reliably extracted in all cases. Thus, algorithms presented in this paper should be directly applicable in daily clinical routine without major modifications.

Journal Article•DOI•
TL;DR: The authors' results show that mean-based filtering is consistently more effective than median-based algorithms for removing inhomogeneities in MR images, and that artifacts are frequently introduced into images at the most commonly used window sizes.
Abstract: Grayscale inhomogeneities in magnetic resonance (MR) images confound quantitative analysis of these images. Homomorphic unsharp masking and its variations have been commonly used as a post-processing method to remove inhomogeneities in MR images. However, little data is available in the literature assessing the relative effectiveness of these algorithms at removing inhomogeneities, or describing how these algorithms can affect image data. In this study, the authors address these questions quantitatively using simulated images with artificially constructed and empirically measured bias fields. The authors' results show that mean-based filtering is consistently more effective than median-based algorithms for removing inhomogeneities in MR images, and that artifacts are frequently introduced into images at the most commonly used window sizes. The authors' results demonstrate dramatic improvement in the effectiveness of the algorithms with significantly larger windows than are commonly used.
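
Homomorphic unsharp masking in the form studied here divides the image by a local average and rescales by the global mean; the mean- and median-based variants differ only in the window statistic. A minimal sketch, with the window size illustrative:

```python
# Homomorphic unsharp masking with a mean or median window statistic.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def homomorphic_unsharp(im, window=65, statistic='mean'):
    im = im.astype(float)
    if statistic == 'mean':
        local = uniform_filter(im, size=window)
    else:
        local = median_filter(im, size=window)
    return im * (im.mean() / np.maximum(local, 1e-6))
```

The paper's findings map onto the two arguments: the mean statistic, with windows substantially larger than common defaults, performed best.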

Journal Article•DOI•
TL;DR: A novel method for fully automated segmentation that is based on description of shape and its variation using point distribution models (PDM's) and incorporates a priori knowledge about shapes of the neuroanatomic structures to provide their robust segmentation and labeling in magnetic resonance (MR) brain images.
Abstract: This paper reports a novel method for fully automated segmentation that is based on description of shape and its variation using point distribution models (PDM's). An improvement of the active shape procedure introduced by Cootes and Taylor (1997) to find new examples of previously learned shapes using PDM's is presented. The new method for segmentation and interpretation of deep neuroanatomic structures such as the thalamus, putamen, ventricular system, etc. incorporates a priori knowledge about shapes of the neuroanatomic structures to provide their robust segmentation and labeling in magnetic resonance (MR) brain images. The method was trained on eight MR brain images and tested on 19 brain images by comparison to observer-defined independent standards. Neuroanatomic structures in all testing images were successfully identified. Computer-identified and observer-defined neuroanatomic structures agreed well. The average labeling error was 7% ± 3%. Border positioning errors were quite small, with an average border positioning error of 0.8 ± 0.1 pixels in 256 × 256 MR images. The presented method was specifically developed for segmentation of neuroanatomic structures in MR brain images. However, it is generally applicable to virtually any task involving deformable shape analysis.
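
The PDM at the heart of the method is PCA over aligned landmark vectors, with new shape instances generated as x = mean + Pb and the mode weights b clamped to plausible limits. A minimal sketch, with Procrustes alignment assumed already done:

```python
# Build a point distribution model and synthesize shapes from it.
import numpy as np

def build_pdm(shapes, n_modes=5):
    """shapes: (n_examples, 2*n_landmarks) pre-aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    cov = X.T @ X / (len(shapes) - 1)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_modes]      # largest modes first
    return mean, eigvec[:, order], eigval[order]

def synthesize(mean, P, eigval, b):
    b = np.clip(b, -3 * np.sqrt(eigval), 3 * np.sqrt(eigval))
    return mean + P @ b
```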

Journal Article•DOI•
TL;DR: The essential idea of the proposed approach is to apply a fuzzified image of a mammogram to locate the suspicious regions and to combine the fuzzified image with the original image to preserve fidelity.
Abstract: Breast cancer continues to be a significant public health problem in the United States. Approximately 182,000 new cases of breast cancer are diagnosed and 46,000 women die of breast cancer each year. Even more disturbing is the fact that one out of eight women in the United States will develop breast cancer at some point during her lifetime. Since the cause of breast cancer remains unknown, primary prevention becomes impossible. Computer-aided mammography is an important and challenging task in automated diagnosis. It has great potential over traditional interpretation of film-screen mammography in terms of efficiency and accuracy. Microcalcifications are the earliest sign of breast carcinomas and their detection is one of the key issues for breast cancer control. In this study, a novel approach to microcalcification detection based on a fuzzy logic technique is presented. Microcalcifications are first enhanced based on their brightness and nonuniformity. Then, the irrelevant breast structures are excluded by a curve detector. Finally, microcalcifications are located using an iterative threshold selection method. The shapes of microcalcifications are reconstructed and the isolated pixels are removed by employing the mathematical morphology technique. The essential idea of the proposed approach is to apply a fuzzified image of a mammogram to locate the suspicious regions and to combine the fuzzified image with the original image to preserve fidelity. The major advantage of the proposed method is its ability to detect microcalcifications even in very dense breast mammograms. A series of clinical mammograms are employed to test the proposed algorithm and the performance is evaluated by the free-response receiver operating characteristic curve. The experiments aptly show that the microcalcifications can be accurately detected even in very dense mammograms using the proposed approach.

Journal Article•DOI•
TL;DR: It is shown that conventional ACEs use linear functions to compute the new CGs, but the proposed nonlinear function produces an adequate CG, resulting in little noise overenhancement and fewer ringing artifacts.
Abstract: The adaptive contrast enhancement (ACE) algorithm, which uses contrast gains (CGs) to adjust the high-frequency components of images, is a well-known technique for medical image processing. Conventionally, the CG is either a constant or inversely proportional to the local standard deviation (LSD). However, it is known that conventional approaches entail noise overenhancement and ringing artifacts. In this paper, the authors present a new ACE algorithm that eliminates these problems. First, a mathematical model for the LSD distribution is proposed by extending Hunt's (1976) image model. Then, the CG is formulated as a function of the LSD. The function, which is nonlinear, is determined by the transformation between the LSD histogram and a desired LSD distribution. Using the authors' formulation, it can be shown that conventional ACEs use linear functions to compute the new CGs. It is the proposed nonlinear function that produces an adequate CG, resulting in little noise overenhancement and fewer ringing artifacts. Finally, simulations using some X-ray images are provided to demonstrate the effectiveness of the authors' new algorithm.
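
For context, generic ACE computes out = m(x) + g · (f(x) − m(x)) with the gain g derived from the local standard deviation. The sketch below implements the conventional clipped inverse-LSD baseline; the paper's contribution, a nonlinear gain obtained by matching the LSD histogram to a desired distribution, is not reproduced, and the parameters are illustrative.

```python
# Conventional ACE baseline with a clipped inverse-LSD contrast gain.
import numpy as np
from scipy.ndimage import uniform_filter

def ace(im, window=33, gain_cap=5.0):
    im = im.astype(float)
    m = uniform_filter(im, size=window)                        # local mean
    lsd = np.sqrt(np.maximum(uniform_filter(im**2, size=window) - m**2, 0.0))
    g = np.minimum(im.mean() / np.maximum(lsd, 1e-6), gain_cap)
    return m + g * (im - m)
```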

Journal Article•DOI•
TL;DR: The authors investigate intraoperative brain deformation by examining threshold boundary overlays and difference images and by measuring ventricular volume and present preliminary results obtained using a nonrigid registration algorithm to quantify deformation.
Abstract: All image-guided neurosurgical systems that the authors are aware of assume that the head and its contents behave as a rigid body. It is important to measure intraoperative brain deformation (brain shift) to provide some indication of the application accuracy of image-guided surgical systems, and also to provide data to develop and validate nonrigid registration algorithms to correct for such deformation. The authors are collecting data from patients undergoing neurosurgery in a high-field (1.5 T) interventional magnetic resonance (MR) scanner. High-contrast and high-resolution gradient-echo MR image volumes are collected immediately prior to surgery, during surgery, and at the end of surgery, with the patient intubated and lying on the operating table in the operative position. Here, the authors report initial results from six patients: one freehand biopsy, one stereotactic functional procedure, and four resections. The authors investigate intraoperative brain deformation by examining threshold boundary overlays and difference images and by measuring ventricular volume. They also present preliminary results obtained using a nonrigid registration algorithm to quantify deformation. They found that some cases had much greater deformation than others, and also that, regardless of the procedure, there was very little deformation of the midline, the tentorium, the hemisphere contralateral to the procedure, and ipsilateral structures except those that are within 1 cm of the lesion or are gravitationally above the surgical site.

Journal Article•DOI•
TL;DR: The WGF algorithm, which not only allows the combination of multiple types of geometrical information but also handles point-based and surface-based registration as degenerate cases, could form the foundation of a "flexible" surgical navigation system that allows the surgeon to use what he considers the method most appropriate for an individual clinical situation.
Abstract: Most previously reported registration techniques that align three-dimensional image volumes by matching geometrical features such as points or surfaces use a single type of feature. The authors recently reported a hybrid registration technique that uses a weighted combination of multiple geometrical feature shapes. In this study they use the weighted geometrical feature (WGF) algorithm to register computed tomography (CT) images of the head to physical space using the skin surface only, the bone surface only, and various weighted combinations of these surfaces and one fiducial point (centroid of a bone-implanted marker). The authors use data acquired from 12 patients that underwent temporal lobe craniotomies for the resection of cerebral lesions. The authors evaluate and compare the accuracy of the registrations obtained using these various approaches by using as a reference gold standard the registration obtained using three bone-implanted markers. The results demonstrate that a combination of geometrical features can improve the accuracy of CT-to-physical space registration. Point-based registration requires a minimum of three noncollinear points. The position of a bone-implanted marker can be determined much more accurately than that of a skin-affixed marker or an anatomic landmark. A major disadvantage of using bone-implanted markers is that an invasive procedure is required to implant each marker. By combining surface information, the WGF algorithm allows registration to be performed using only one or two such markers. One important finding is that the use of a single, very accurate point (a bone-implanted marker) allows very accurate surface-based registration to be achieved using very few surface points. Finally, the WGF algorithm, which not only allows the combination of multiple types of geometrical information but also handles point-based and surface-based registration as degenerate cases, could form the foundation of a "flexible" surgical navigation system that allows the surgeon to use what he considers the method most appropriate for an individual clinical situation.
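
The point-based degenerate case that the WGF algorithm subsumes is weighted rigid registration, solvable in closed form via an SVD; surface points and bone-implanted markers simply enter with different weights. A standard sketch follows (the WGF weighting scheme itself is not reproduced):

```python
# Closed-form weighted rigid point registration (Horn/Arun style with weights).
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Find R, t minimizing sum_i w_i ||R src_i + t - dst_i||^2, points (N, 3)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    c_src = (w[:, None] * src).sum(axis=0)          # weighted centroids
    c_dst = (w[:, None] * dst).sum(axis=0)
    H = (src - c_src).T @ (w[:, None] * (dst - c_dst))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```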

Journal Article•DOI•
TL;DR: The authors have found the calibration to be stable after re-registration of the sensors under varying conditions, such as different heights of the OR table and varying positions of the OR equipment, over a longer time interval; these results encourage the further development of a hybrid magnetooptical tracker for computer-aided surgery.
Abstract: The purpose of this paper was to assess to what extent an optical tracking system (OTS) used for position determination in computer-aided surgery (CAS) can be enhanced by combining it with a direct current (DC) driven electromagnetic tracking system (EMTS). The main advantage of the EMTS is the fact that it is not dependent on a free line-of-sight. Unfortunately, the accuracy of the EMTS is highly affected by nearby ferromagnetic materials. The authors have explored to what extent the influence of the metallic equipment in the operating room (OR) can be compensated by collecting precise information on the nonlinear local error in the EMTS by using the OTS for setting up a calibration look-up table. After calibration of the EMTS and registration of the sensor systems in the OR, the authors have found the average Euclidean deviation in position readings between the DC tracker and the OTS reduced from 2.9 ± 1.0 mm to 2.1 ± 0.8 mm within a half-sphere of 530-mm radius around the magnetic field emitter. Furthermore, the authors have found the calibration to be stable after re-registration of the sensors under varying conditions such as different heights of the OR table and varying positions of the OR equipment over a longer time interval. These results encourage the further development of a hybrid magnetooptical tracker for computer-aided surgery where the electromagnetic tracker acts as an auxiliary source of position information for the optical system. Strategies for enhancing the reliability of the proposed hybrid magnetooptic tracker by detecting artifacts induced by mobile ferromagnetic objects such as surgical tools are discussed.

Journal Article•DOI•
TL;DR: A computer-aided diagnosis system, based on a two-level artificial neural network (ANN) architecture, trained, tested, and evaluated specifically on the problem of detecting lung cancer nodules found on digitized chest radiographs.
Abstract: In this work, the authors have developed a computer-aided diagnosis system, based on a two-level artificial neural network (ANN) architecture. This was trained, tested, and evaluated specifically on the problem of detecting lung cancer nodules found on digitized chest radiographs. The first ANN performs the detection of suspicious regions in a low-resolution image. The inputs to the second ANN are the curvature peaks computed for all pixels in each suspicious region. This comes from the fact that small tumors possess an identifiable signature in curvature-peak feature space, where curvature is the local curvature of the image data when viewed as a relief map. The output of this network is thresholded at a chosen level of significance to give a positive detection. Tests are performed using 60 radiographs taken from a routine clinic with 90 real nodules and 288 simulated nodules. The authors employed the free-response receiver operating characteristic method, with the mean number of false positives (FPs) and the sensitivity as performance indexes, to evaluate all the simulation results. The combination of the two networks provides results of 89%-96% sensitivity and 5-7 FPs/image, depending on the size of the nodules.

Journal Article•DOI•
John G. Sled, G.B. Pike•
TL;DR: This first-principles analysis clarifies, for the general case of conducting objects, the relationship between the excitation field and the reception sensitivity of circularly and linearly polarized coils.
Abstract: Motivated by the observation that the diagonal pattern of intensity nonuniformity usually associated with linearly polarized radio-frequency (RF) coils is often present in neurological scans using circularly polarized coils, a theoretical analysis has been conducted of the intensity nonuniformity inherent in imaging an elliptically shaped object using 1.5-T magnets and circularly polarized RF coils. This first-principles analysis clarifies, for the general case of conducting objects, the relationship between the excitation field and the reception sensitivity of circularly and linearly polarized coils. The results, validated experimentally using a standard spin-echo imaging sequence and an in vivo B₁ field mapping technique, are shown to be accurate to within 1%-2% root mean square, suggesting that these electromagnetic interactions with the object account for most of the intensity nonuniformity observed.

Journal Article•DOI•
TL;DR: OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and it causes only a slight increase of noise and global errors in the reconstructions.
Abstract: Iterative maximum likelihood (ML) transmission computed tomography algorithms have distinct advantages over Fourier-based reconstruction, but unfortunately require increased computation time. The convex algorithm is a relatively fast iterative ML algorithm, but it is nevertheless too slow for many applications. Therefore, an acceleration of this algorithm by using ordered subsets of projections is proposed [the ordered subsets convex algorithm (OSC)]. OSC applies the convex algorithm sequentially to subsets of projections. OSC was compared with the convex algorithm using simulated and physical thorax phantom data. Reconstructions were performed for OSC using eight and 16 subsets (eight and four projections/subset, respectively). Global errors, image noise, contrast recovery, and likelihood increase were calculated. Results show that OSC is faster than the convex algorithm, the amount of acceleration being approximately proportional to the number of subsets in OSC, and that it causes only a slight increase of noise and global errors in the reconstructions. Images and image profiles of the reconstructions were in good agreement. In conclusion, OSC and the convex algorithm result in similar image quality, but OSC is more than an order of magnitude faster.

Journal Article•DOI•
TL;DR: A new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT) is presented, which has the potential to make volume measurements more accurate and classifies noisy, low-resolution data well.
Abstract: The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurate, and it classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, ρ(x), from the samples and then looking at the distribution of values that ρ(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the spacing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.

Journal Article•DOI•
TL;DR: The authors proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data, and showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods.
Abstract: To aid in display, manipulation, and analysis, biomedical image data usually need to be converted to data of isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10×) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation. The authors proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, the authors showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the three-dimensional (3-D) interpolation problem, the authors compare statistically the accuracy of 8 different methods: nearest-neighbor, linear grey-level, grey-level cubic spline, grey-level modified cubic spline, Goshtasby et al. (1992), and 3 methods from the grey-level shape-based class. A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different 3-D imaging applications, is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using 3 measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods in all measures in all applications considered here, with a statistical relevance ranging from 10% to 32% (mean=15%) for mean-squared difference.
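
For the binary case, the shape-based idea can be sketched in a few lines: convert each slice to a signed distance map, interpolate the distance values between slices, and threshold at zero. The paper's grey-level method generalizes this lifting to grey data; the sketch below covers only the binary ancestor.

```python
# Binary shape-based interpolation via signed distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    return edt(mask) - edt(~mask)          # positive inside, negative outside

def interpolate_slice(mask_a, mask_b, t=0.5):
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0                           # interpolated binary slice
```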

Journal Article•DOI•
TL;DR: MR image texture may be a useful aid in Alzheimer's disease, both as a diagnostic marker and as a measure of progression.
Abstract: The authors assess the value of magnetic resonance (MR) image texture in Alzheimer's disease (AD), both as a diagnostic marker and as a measure of progression. T1-weighted MR scans were acquired from 40 normal controls and 24 AD patients. These were split into a training set (20 controls, 10 AD) and a test set (20 controls, 14 AD). In addition, five control subjects and five AD patients were scanned repeatedly over several years. On each scan a texture feature vector was evaluated over the brain; this consisted of 260 measures derived from the spatial gray-level dependence method. A stepwise discriminant analysis was applied to the training set to obtain a linear discriminant function. In the test set, this function yielded significantly different values for the control and AD groups (p<0.05); for the repeatedly scanned AD patients, the median increment in the discriminant function of 1.4 was significantly different from zero (p<0.05). MR image texture may be a useful aid in the diagnosis and tracking of Alzheimer's disease.
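
Features "derived from the spatial gray-level dependence method" are co-occurrence-matrix statistics. A sketch using scikit-image follows; the offsets, angles, and the four properties are assumptions and do not reproduce the paper's 260-element feature vector.

```python
# Co-occurrence (spatial grey-level dependence) texture features.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(slice_u8, distances=(1, 2, 4),
                     angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """slice_u8: 2-D uint8 image (e.g., a brain slice rescaled to 0-255)."""
    glcm = graycomatrix(slice_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).ravel()
             for p in ('contrast', 'correlation', 'energy', 'homogeneity')]
    return np.concatenate(feats)
```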