
Showing papers in "IEEE Transactions on Medical Imaging in 2005"


Journal ArticleDOI
TL;DR: This sensitivity analysis shows that although changes in the velocity fields can be observed, the characterization of the intra-aneurysmal flow patterns is not altered when the mean input flow, the flow division, the viscosity model, or mesh resolution are changed.
Abstract: Hemodynamic factors are thought to be implicated in the progression and rupture of intracranial aneurysms. Current efforts aim to study the possible associations of hemodynamic characteristics, such as complexity and stability of intra-aneurysmal flow patterns, and size and location of the region of flow impingement, with the clinical history of aneurysmal rupture. However, there are no reliable methods for measuring blood flow patterns in vivo. In this paper, an efficient methodology for patient-specific modeling and characterization of the hemodynamics in cerebral aneurysms from medical images is described. A sensitivity analysis of the hemodynamic characteristics with respect to variations of several variables over the expected physiologic range of conditions is also presented. This sensitivity analysis shows that although changes in the velocity fields can be observed, the characterization of the intra-aneurysmal flow patterns is not altered when the mean input flow, the flow division, the viscosity model, or the mesh resolution is changed. It was also found that the variable with the greatest impact on the computed flow fields is the geometry of the vascular structures. We conclude that with the proposed modeling pipeline, clinical studies involving large numbers of cerebral aneurysms are feasible.

569 citations


Journal ArticleDOI
TL;DR: A novel red lesion detection method is presented based on a hybrid approach, combining prior works by Spencer et al. (1996) and Frame et al. (1998) with two important new contributions, including a new red lesion candidate detection system based on pixel classification.
Abstract: The robust detection of red lesions in digital color fundus photographs is a critical step in the development of automated screening systems for diabetic retinopathy. In this paper, a novel red lesion detection method is presented based on a hybrid approach, combining prior works by Spencer et al. (1996) and Frame et al. (1998) with two important new contributions. The first contribution is a new red lesion candidate detection system based on pixel classification. Using this technique, vasculature and red lesions are separated from the background of the image. After removal of the connected vasculature the remaining objects are considered possible red lesions. Second, an extensive number of new features are added to those proposed by Spencer-Frame. The detected candidate objects are classified using all features and a k-nearest neighbor classifier. An extensive evaluation was performed on a test set composed of images representative of those normally found in a screening set. When determining whether an image contains red lesions the system achieves a sensitivity of 100% at a specificity of 87%. The method is compared with several different automatic systems and is shown to outperform them all. Performance is close to that of a human expert examining the images for the presence of red lesions.
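
A minimal sketch of the final candidate-classification stage, assuming precomputed per-candidate feature vectors (the feature count, data, and neighbor count below are placeholders; the pixel-classification candidate detector and vessel removal are not reproduced):

```python
# Hypothetical sketch: a k-nearest-neighbor classifier assigns each candidate
# object a lesion probability from its feature vector, as in the final stage
# of such a system. All data and dimensions here are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))    # 12 shape/intensity features per candidate
y_train = rng.integers(0, 2, size=200)  # 1 = true red lesion, 0 = spurious candidate

knn = KNeighborsClassifier(n_neighbors=11)
knn.fit(X_train, y_train)

X_new = rng.normal(size=(5, 12))            # features of new candidate objects
p_lesion = knn.predict_proba(X_new)[:, 1]   # fraction of lesion-labeled neighbors
print(p_lesion)
```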

526 citations


Journal ArticleDOI
TL;DR: A symmetric formulation is proposed, which combines single- and double-layer potentials, and which is new to the field of EEG, although it has been applied to other problems in electromagnetism.
Abstract: The forward electroencephalography (EEG) problem involves finding a potential V from the Poisson equation ∇·(σ∇V) = f, in which f represents electrical sources in the brain, and σ the conductivity of the head tissues. In the piecewise constant conductivity head model, this can be accomplished by the boundary element method (BEM) using a suitable integral formulation. Most previous work uses the same integral formulation, corresponding to a double-layer potential. We present a conceptual framework based on a well-known theorem (Theorem 1) that characterizes harmonic functions defined on the complement of a bounded smooth surface. This theorem says that such harmonic functions are completely defined by their values and those of their normal derivatives on this surface. It allows us to cast the previous BEM approaches in a unified setting and to develop two new approaches corresponding to different ways of exploiting the same theorem. Specifically, we first present a dual approach which involves a single-layer potential. Then, we propose a symmetric formulation, which combines single- and double-layer potentials, and which is new to the field of EEG, although it has been applied to other problems in electromagnetism. The three methods have been evaluated numerically using a spherical geometry with known analytical solution, and the symmetric formulation achieves a significantly higher accuracy than the alternative methods. Additionally, we present results with realistically shaped meshes. Besides providing a better understanding of the foundations of BEM methods, our approach also appears to lead to more efficient algorithms.
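
The theorem's content, written in the standard potential-theory form (sign conventions and notation here are assumptions, not quoted from the paper): a function u harmonic outside a bounded smooth surface S is recovered from its boundary values and normal derivatives,

```latex
u(\mathbf{r}) \;=\;
  \int_{S} \frac{\partial u}{\partial n'}(\mathbf{r}')\, G(\mathbf{r},\mathbf{r}')\, \mathrm{d}S'
  \;-\;
  \int_{S} u(\mathbf{r}')\, \frac{\partial G}{\partial n'}(\mathbf{r},\mathbf{r}')\, \mathrm{d}S',
\qquad
G(\mathbf{r},\mathbf{r}') = \frac{1}{4\pi\,\lVert \mathbf{r}-\mathbf{r}' \rVert}.
```

The first integral is a single-layer potential and the second a double-layer potential; the dual and symmetric formulations described in the abstract correspond to different ways of exploiting this identity.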

392 citations


Journal ArticleDOI
TL;DR: A validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images demonstrates that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities and shows that simulated data results can be extended to real data.
Abstract: This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
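
As a reminder of how such labelings are typically scored against ground truth, here is a minimal sketch of two standard global overlap measures; the paper's specific local and global measures are not detailed in the abstract, so these serve as representative examples only:

```python
# Two common overlap measures between a segmentation and a ground-truth mask.
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def jaccard(seg, gt):
    """Jaccard index (intersection over union)."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union

seg = np.zeros((64, 64), bool); seg[10:40, 10:40] = True   # toy segmentation
gt  = np.zeros((64, 64), bool); gt[15:45, 12:42] = True    # toy ground truth
print(dice(seg, gt), jaccard(seg, gt))
```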

381 citations


Journal ArticleDOI
TL;DR: An optoacoustic vascular imaging system that records laser-induced ultrasound transients on the skin surface with an ultrasound transducer array and displays the images online, combining the merits and most compelling features of optics and ultrasound in a single high-contrast vascular imaging device.
Abstract: In optoacoustic imaging, short laser pulses irradiate highly scattering human tissue and adiabatically heat embedded absorbing structures, such as blood vessels, to generate ultrasound transients by means of the thermoelastic effect. We present an optoacoustic vascular imaging system that records these transients on the skin surface with an ultrasound transducer array and displays the images online. With a single laser pulse a complete optoacoustic B-mode image can be acquired. The optoacoustic system exploits the high intrinsic optical contrast of blood and provides high-contrast images without the need for contrast agents. The high spatial resolution of the system is determined by the acoustic propagation and is limited to the submillimeter range by our 7.5-MHz linear array transducer. A Q-switched alexandrite laser emitting short near-infrared laser pulses at a wavelength of 760 nm allows an imaging depth of a few centimeters. The system provides real-time images at frame-rates of 7.5 Hz and optionally displays the classically generated ultrasound image alongside the optoacoustic image. The functionality of the system was demonstrated in vivo on human finger, arm and leg. The proposed system combines the merits and most compelling features of optics and ultrasound in a single high-contrast vascular imaging device.

377 citations


Journal ArticleDOI
TL;DR: A new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor, and a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues are proposed.
Abstract: We propose a new model to simulate the three-dimensional (3-D) growth of glioblastoma multiforme (GBM), the most aggressive glial tumor. The GBM speed of growth depends on the invaded tissue: faster in white than in gray matter, it is stopped by the dura or the ventricles. These different structures are introduced into the model using an atlas matching technique. The atlas includes both the segmentations of anatomical structures and diffusion information in white matter fibers. We use the finite element method (FEM) to simulate the invasion of the GBM in the brain parenchyma and its mechanical interaction with the invaded structures (mass effect). Depending on the considered tissue, the former effect is modeled with a reaction-diffusion or a Gompertz equation, while the latter is based on a linear elastic brain constitutive equation. In addition, we propose a new coupling equation taking into account the mechanical influence of the tumor cells on the invaded tissues. The tumor growth simulation is assessed by comparing the in-silico GBM growth with the real growth observed on two magnetic resonance images (MRIs) of a patient acquired six months apart. Results show the feasibility of this new conceptual approach and justify its further evaluation.
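
A minimal sketch of the reaction-diffusion ingredient of such a model (Fisher-KPP form with tissue-dependent diffusivity, explicit finite differences on a toy grid); the parameters are illustrative, and the finite-element mechanical coupling, atlas matching, and Gompertz variant are not reproduced:

```python
# Toy 2-D reaction-diffusion growth: diffusion of tumor cell density c plus
# logistic proliferation, with faster diffusion in a "white matter" region.
import numpy as np

n, dt, dx = 128, 0.1, 1.0
rho = 0.012                            # proliferation rate (illustrative)
D = np.full((n, n), 0.0013)            # "gray matter" diffusivity (illustrative)
D[:, n // 2:] = 0.0065                 # faster invasion in the "white matter" half

c = np.zeros((n, n))
c[n // 2, n // 2] = 1.0                # initial tumor cell density seed

for _ in range(1000):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    # D is kept outside the divergence for brevity; the full model uses div(D grad c)
    c += dt * (D * lap + rho * c * (1.0 - c))
```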

363 citations


Journal ArticleDOI
TL;DR: A new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images to estimate displacement fields from two-dimensional ultrasound sequences of the heart, which uses a multiresolution optimization strategy to obtain a higher speed and robustness.
Abstract: We propose a new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images. The specific application is to estimate displacement fields from two-dimensional ultrasound sequences of the heart. The basic idea is to find a spatio-temporal deformation field that effectively compensates for the motion by minimizing a difference with respect to a reference frame. The key feature of our method is the use of a semi-local spatio-temporal parametric model for the deformation using splines, and the reformulation of the registration task as a global optimization problem. The scale of the spline model controls the smoothness of the displacement field. Our algorithm uses a multiresolution optimization strategy to obtain a higher speed and robustness. We evaluated the accuracy of our algorithm using a synthetic sequence generated with an ultrasound simulation package, together with a realistic cardiac motion model. We compared our new global multiframe approach with a previous method based on pairwise registration of consecutive frames to demonstrate the benefits of introducing temporal consistency. Finally, we applied the algorithm to the regional analysis of the left ventricle. Displacement and strain parameters were evaluated showing significant differences between the normal and pathological segments, thereby illustrating the clinical applicability of our method.

344 citations


Journal ArticleDOI
TL;DR: A new airway segmentation method based on fuzzy connectivity that works on various types of scans (low-dose and regular dose, normal subjects and diseased subjects) without the need for the user to manually adjust any parameters is presented.
Abstract: The segmentation of the human airway tree from volumetric computed tomography (CT) images builds an important step for many clinical applications and for physiological studies. Previously proposed algorithms suffer from one or several problems: leaking into the surrounding lung parenchyma, the need for the user to manually adjust parameters, excessive runtime. Low-dose CT scans are increasingly utilized in lung screening studies, but segmenting them with traditional airway segmentation algorithms often yields less than satisfying results. In this paper, a new airway segmentation method based on fuzzy connectivity is presented. Small adaptive regions of interest are used that follow the airway branches as they are segmented. This has several advantages. It makes it possible to detect leaks early and avoid them, the segmentation algorithm can automatically adapt to changing image parameters, and the computing time is kept within moderate values. The new method is robust in the sense that it works on various types of scans (low-dose and regular dose, normal subjects and diseased subjects) without the need for the user to manually adjust any parameters. Comparison with a commonly used region-growing segmentation algorithm shows that the newly proposed method retrieves a significantly higher count of airway branches. A method that conducts accurate cross-sectional airway measurements is presented as an additional processing step. Measurements are conducted in the original gray-level volume. Validation on a phantom shows that subvoxel accuracy is achieved for all airway sizes and airway orientations.

333 citations


Journal ArticleDOI
TL;DR: Using a minimal oversampling ratio and presampled kernel, this work is able to perform a three-dimensional reconstruction in one-eighth the time and with one-third the computer memory versus using an oversampling ratio of two and a Kaiser-Bessel convolution kernel, while maintaining the same level of accuracy.
Abstract: Reconstruction of magnetic resonance images from data not falling on a Cartesian grid is a Fourier inversion problem typically solved using convolution interpolation, also known as gridding. Gridding is simple and robust and has parameters, the grid oversampling ratio and the kernel width, that can be used to trade accuracy for reductions in computational memory and time. We have found that significant reductions in computation memory and time can be obtained while maintaining high accuracy by using a minimal oversampling ratio, from 1.125 to 1.375, instead of the typically employed grid oversampling ratio of two. When using a minimal oversampling ratio, appropriate design of the convolution kernel is important for maintaining high accuracy. We derive a simple equation for choosing the optimal Kaiser-Bessel convolution kernel for a given oversampling ratio and kernel width. As well, we evaluate the effect of presampling the kernel, a common technique used to reduce the computation time, and find that using linear interpolation between samples adds negligible error with far fewer samples than are necessary with nearest-neighbor interpolation. We also develop a new method for choosing the optimal presampled kernel. Using a minimal oversampling ratio and presampled kernel, we are able to perform a three-dimensional (3-D) reconstruction in one-eighth the time and with one-third the computer memory versus using an oversampling ratio of two and a Kaiser-Bessel convolution kernel, while maintaining the same level of accuracy.
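
A 1-D sketch of convolution gridding with a Kaiser-Bessel kernel at a small oversampling ratio; the kernel-parameter formula below is one published rule of thumb and should be treated as an assumption rather than the paper's derived optimum:

```python
# 1-D convolution gridding of nonuniform k-space samples onto an oversampled grid.
import numpy as np

def kaiser_bessel(u, width, beta):
    """Kaiser-Bessel kernel, nonzero for |u| <= width/2 (grid units)."""
    x = 2.0 * u / width
    out = np.zeros_like(u, dtype=float)
    m = np.abs(x) <= 1.0
    out[m] = np.i0(beta * np.sqrt(1.0 - x[m] ** 2)) / width
    return out

N, osr, W = 256, 1.25, 4                 # image size, oversampling ratio, kernel width
G = int(N * osr)                         # oversampled grid size
beta = np.pi * np.sqrt((W / osr) ** 2 * (osr - 0.5) ** 2 - 0.8)  # assumed rule of thumb

k = (np.random.rand(2000) - 0.5) * N     # nonuniform sample positions (cycles/FOV)
d = np.random.randn(2000) + 1j * np.random.randn(2000)  # sample values

grid = np.zeros(G, complex)
for kj, dj in zip(k, d):
    center = kj * osr + G / 2            # position on the oversampled grid
    idx = np.arange(int(np.ceil(center - W / 2)), int(np.floor(center + W / 2)) + 1)
    grid[idx % G] += dj * kaiser_bessel(idx - center, W, beta)
# FFT, cropping to N, and deapodization (division by the kernel's transform)
# would complete the reconstruction.
```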

326 citations


Journal ArticleDOI
TL;DR: A combination of mean-shift-based tracking processes to establish migrating cell trajectories through in vitro phase-contrast video microscopy, and the methodology was applied to cancer cell tracking and showed that cytochalasin-D significantly inhibits cell motility.
Abstract: In this paper, we propose a combination of mean-shift-based tracking processes to establish migrating cell trajectories through in vitro phase-contrast video microscopy. After a recapitulation on how the mean-shift algorithm permits efficient object tracking we describe the proposed extension and apply it to the in vitro cell tracking problem. In this application, the cells are unmarked (i.e., no fluorescent probe is used) and are observed under classical phase-contrast microscopy. By introducing an adaptive combination of several kernels, we address several problems such as variations in size and shape of the tracked objects (e.g., those occurring in the case of cell membrane extensions), the presence of incomplete (or noncontrasted) object boundaries, partially overlapping objects and object splitting (in the case of cell divisions or mitoses). Comparing the tracking results automatically obtained to those generated manually by a human expert, we tested the stability of the different algorithm parameters and their effects on the tracking results. We also show how the method is resistant to a decrease in image resolution and accidental defocusing (which may occur during long experiments, e.g., dozens of hours). Finally, we applied our methodology to cancer cell tracking and showed that cytochalasin-D significantly inhibits cell motility.
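
A sketch of the core mean-shift step on a per-pixel weight map (for example, from histogram backprojection of the tracked cell's appearance); the adaptive multi-kernel combination that handles shape change, overlap, and division is not reproduced:

```python
# Mean shift with a single fixed-radius kernel: iteratively move the window
# center to the weighted centroid of the weights inside the window.
import numpy as np

def mean_shift(weights, pos, radius, n_iter=20, tol=0.5):
    """Shift `pos` (row, col) to the weighted centroid of a disk until converged."""
    rows, cols = np.indices(weights.shape)
    pos = np.asarray(pos, float)
    for _ in range(n_iter):
        mask = (rows - pos[0]) ** 2 + (cols - pos[1]) ** 2 <= radius ** 2
        w = weights * mask
        total = w.sum()
        if total == 0:
            break
        new = np.array([(rows * w).sum(), (cols * w).sum()]) / total
        if np.linalg.norm(new - pos) < tol:
            return new
        pos = new
    return pos

w = np.zeros((100, 100)); w[60:70, 40:50] = 1.0   # toy "cell likelihood" blob
print(mean_shift(w, pos=(50, 50), radius=15))      # converges toward the blob
```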

314 citations


Journal ArticleDOI
TL;DR: This paper investigated several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs), and formulated differentiation of malignant from benign MCs as a supervised learning problem, and applied these learning methods to develop the classification algorithm.
Abstract: In this paper, we investigate several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs). The classifier is part of a computer-aided diagnosis (CADx) scheme that is aimed at assisting radiologists in making more accurate diagnoses of breast cancer on mammograms. The methods we considered were: support vector machine (SVM), kernel Fisher discriminant (KFD), relevance vector machine (RVM), and committee machines (ensemble averaging and AdaBoost), of which most have been developed recently in statistical learning theory. We formulated differentiation of malignant from benign MCs as a supervised learning problem, and applied these learning methods to develop the classification algorithm. As input, these methods used image features automatically extracted from clustered MCs. We tested these methods using a database of 697 clinical mammograms from 386 cases, which included a wide spectrum of difficult-to-classify cases. We analyzed the distribution of the cases in this database using the multidimensional scaling technique, which reveals that in the feature space the malignant cases are not trivially separable from the benign ones. We used receiver operating characteristic (ROC) analysis to evaluate and to compare classification performance by the different methods. In addition, we also investigated how to combine information from multiple-view mammograms of the same case so that the best decision can be made by a classifier. In our experiments, the kernel-based methods (i.e., SVM, KFD, and RVM) yielded the best performance (Az = 0.85, SVM), significantly outperforming a well-established, clinically-proven CADx approach that is based on a neural network (Az = 0.80).
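
A minimal sketch of the best-performing configuration reported, an SVM scored by the area under the ROC curve; the features here are synthetic stand-ins for the image features extracted from MC clusters:

```python
# Train an RBF-kernel SVM on per-cluster features and report the ROC area (Az).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))            # 8 cluster features (placeholder)
y = rng.integers(0, 2, size=400)         # 1 = malignant, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=1.0, probability=True).fit(X_tr, y_tr)
scores = svm.predict_proba(X_te)[:, 1]
print("Az =", roc_auc_score(y_te, scores))  # area under the ROC curve
```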

Journal ArticleDOI
TL;DR: Experimental simulations of a rat head imaged in a working small animal scanner indicate that direct parametric reconstruction can substantially reduce root-mean-squared error (RMSE) in the estimation of kinetic parameters, as compared to indirect methods, without appreciably increasing computation.
Abstract: Our goal in this paper is the estimation of kinetic model parameters for each voxel corresponding to a dense three-dimensional (3-D) positron emission tomography (PET) image. Typically, the activity images are first reconstructed from PET sinogram frames at each measurement time, and then the kinetic parameters are estimated by fitting a model to the reconstructed time-activity response of each voxel. However, this "indirect" approach to kinetic parameter estimation tends to reduce signal-to-noise ratio (SNR) because of the requirement that the sinogram data be divided into individual time frames. In 1985, Carson and Lange proposed, but did not implement, a method based on the expectation-maximization (EM) algorithm for direct parametric reconstruction. The approach is "direct" because it estimates the optimal kinetic parameters directly from the sinogram data, without an intermediate reconstruction step. However, direct voxel-wise parametric reconstruction remained a challenge due to the unsolved complexities of inversion and spatial regularization. In this paper, we demonstrate and evaluate a new and efficient method for direct voxel-wise reconstruction of kinetic parameter images using all frames of the PET data. The direct parametric image reconstruction is formulated in a Bayesian framework, and uses the parametric iterative coordinate descent (PICD) algorithm to solve the resulting optimization problem. The PICD algorithm is computationally efficient and is implemented with spatial regularization in the domain of the physiologically relevant parameters. Our experimental simulations of a rat head imaged in a working small animal scanner indicate that direct parametric reconstruction can substantially reduce root-mean-squared error (RMSE) in the estimation of kinetic parameters, as compared to indirect methods, without appreciably increasing computation.
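
For contrast with the paper's direct approach, here is the conventional "indirect" step it improves upon: fitting a kinetic model voxel by voxel to a reconstructed time-activity curve. The one-tissue compartment model, input function, and parameters below are illustrative assumptions:

```python
# Indirect kinetic parameter estimation: fit C_T(t) = K1 * exp(-k2 t) (x) Cp(t)
# to the noisy time-activity curve (TAC) of a single voxel.
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import fftconvolve

t = np.linspace(0, 60, 121)                      # minutes
dt = t[1] - t[0]
Cp = t * np.exp(-t / 4.0)                        # toy plasma input function

def model(t, K1, k2):
    """One-tissue compartment model via discrete convolution."""
    return K1 * fftconvolve(np.exp(-k2 * t), Cp)[: len(t)] * dt

true = (0.4, 0.12)
tac = model(t, *true) + 0.02 * np.random.randn(len(t))   # noisy voxel TAC
(K1, k2), _ = curve_fit(model, t, tac, p0=(0.1, 0.05), bounds=(0, np.inf))
print(K1, k2)   # estimated kinetic parameters for this voxel
```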

Journal ArticleDOI
TL;DR: Application of STACS to a set of 48 real cardiac MR images shows that it can successfully segment the heart from its surroundings such as the chest wall and the heart structures (the left and right ventricles and the epicardium).
Abstract: The paper presents a novel stochastic active contour scheme (STACS) for automatic image segmentation designed to overcome some of the unique challenges in cardiac MR images such as problems with low contrast, papillary muscles, and turbulent blood flow. STACS minimizes an energy functional that combines stochastic region-based and edge-based information with shape priors of the heart and local properties of the contour. The minimization algorithm solves, by the level set method, the Euler-Lagrange equation that describes the contour evolution. STACS includes an annealing schedule that balances dynamically the weight of the different terms in the energy functional. Three particularly attractive features of STACS are: 1) ability to segment images with low texture contrast by modeling stochastically the image textures; 2) robustness to initial contour and noise because of the utilization of both edge and region-based information; 3) ability to segment the heart from the chest wall and the undesired papillary muscles due to inclusion of heart shape priors. Application of STACS to a set of 48 real cardiac MR images shows that it can successfully segment the heart from its surroundings such as the chest wall and the heart structures (the left and right ventricles and the epicardium). We compare STACS' automatically generated contours with manually-traced contours, or the "gold standard," using both area and edge similarity measures. This assessment demonstrates very good and consistent segmentation performance of STACS.
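
A generic region-based level-set step in the spirit of such schemes (a plain two-phase region term with curvature smoothing; STACS's stochastic texture models, shape prior, and annealing schedule are not reproduced):

```python
# One explicit update of a simple two-phase region-based level set:
# pixels closer to the inside mean push phi up, others push it down,
# with a curvature term smoothing the contour.
import numpy as np

def level_set_step(phi, img, dt=0.2, mu=0.2):
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0    # mean intensity inside
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curvature = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    force = (img - c2)**2 - (img - c1)**2 + mu * curvature
    return phi + dt * force

yy, xx = np.mgrid[:64, :64]
img = (np.sqrt((yy - 30.0)**2 + (xx - 34.0)**2) < 15).astype(float)  # toy object
phi = 20.0 - np.sqrt((yy - 32.0)**2 + (xx - 32.0)**2)  # initial circular contour
for _ in range(200):
    phi = level_set_step(phi, img)   # phi > 0 converges to the object
```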

Journal ArticleDOI
TL;DR: A new framework to compute the displacement field in an iterative process, allowing the solution to gradually move from an approximation formulation to an interpolation formulation (least square minimization of the data error term), aiming at improving the robustness of the algorithm.
Abstract: We present a new algorithm to register 3-D preoperative magnetic resonance (MR) images to intraoperative MR images of the brain which have undergone brain shift. This algorithm relies on a robust estimation of the deformation from a sparse noisy set of measured displacements. We propose a new framework to compute the displacement field in an iterative process, allowing the solution to gradually move from an approximation formulation (minimizing the sum of a regularization term and a data error term) to an interpolation formulation (least square minimization of the data error term). An outlier rejection step is introduced in this gradual registration process using a weighted least trimmed squares approach, aiming at improving the robustness of the algorithm. We use a patient-specific model discretized with the finite element method in order to ensure a realistic mechanical behavior of the brain tissue. To meet the clinical time constraint, we parallelized the slowest step of the algorithm so that we can perform a full 3-D image registration in 35 s (including the image update time) on a heterogeneous cluster of 15 personal computers. The algorithm has been tested on six cases of brain tumor resection, presenting a brain shift of up to 14 mm. The results show a good ability to recover large displacements, and a limited decrease of accuracy near the tumor resection cavity.
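
A sketch of the least-trimmed-squares idea behind the outlier rejection step: repeatedly refit on the fraction of measured displacements with the smallest residuals. A plain linear model stands in for the paper's finite-element brain model, and the weighting scheme is omitted:

```python
# Least trimmed squares: discard the worst-fitting measurements at each
# iteration so that gross outliers do not corrupt the estimate.
import numpy as np

def trimmed_least_squares(A, b, keep=0.75, n_iter=10):
    """Iteratively solve A x ~ b keeping the `keep` fraction of best-fit rows."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    k = int(keep * len(b))
    for _ in range(n_iter):
        r = np.abs(A @ x - b)
        best = np.argsort(r)[:k]             # rows with the smallest residuals
        x = np.linalg.lstsq(A[best], b[best], rcond=None)[0]
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=100)
b[:10] += 5.0                                # gross outliers among the measurements
print(trimmed_least_squares(A, b))           # close to x_true despite outliers
```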

Journal ArticleDOI
TL;DR: It is shown that the normalized Born approach accurately retrieves the position and shape of the fluorochrome even at high background heterogeneity, and that the quantification is relatively insensitive to a varying degree of heterogeneity and background optical properties.
Abstract: We studied the performance of three-dimensional fluorescence tomography of diffuse media in the presence of heterogeneities. Experimental measurements were acquired using an imaging system consisting of a parallel plate-imaging chamber and a lens coupled charge coupled device camera, which enables conventional planar imaging as well as fluorescence tomography. To simulate increasing levels of background heterogeneity, we employed phantoms made of a fluorescent tube surrounded by several absorbers in different combinations of absorption distribution. We also investigated the effect of low absorbing thin layers (such as membranes). We show that the normalized Born approach accurately retrieves the position and shape of the fluorochrome even at high background heterogeneity. We also demonstrate that the quantification is relatively insensitive to a varying degree of heterogeneity and background optical properties. Findings are further contrasted to images obtained with the standard Born expansion and with a normalized approach that divides the fluorescent field with excitation measurements through a homogeneous medium.
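
In schematic form (notation and constant factors are my own, not quoted from the paper), the normalized Born ratio divides the fluorescence field by the excitation field measured through the same medium:

```latex
n_B(\mathbf{r}_s,\mathbf{r}_d)
  = \frac{U_{\mathrm{fl}}(\mathbf{r}_s,\mathbf{r}_d)}
         {U_{\mathrm{exc}}(\mathbf{r}_s,\mathbf{r}_d)}
  \;\propto\;
  \frac{1}{U_{\mathrm{exc}}(\mathbf{r}_s,\mathbf{r}_d)}
  \int U_{\mathrm{exc}}(\mathbf{r}_s,\mathbf{r})\,
       f(\mathbf{r})\, G(\mathbf{r},\mathbf{r}_d)\, \mathrm{d}^3 r
```

Because unknown source strength and bulk attenuation enter numerator and denominator alike, they largely cancel, which is consistent with the reported insensitivity to background heterogeneity, in contrast to normalizing by an excitation field computed through a homogeneous medium.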

Journal ArticleDOI
TL;DR: A segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology, which enjoys the additional benefit that it does not require pathological (hand-segmented) training data.
Abstract: Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration. The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data.

Journal ArticleDOI
TL;DR: A computer-aided diagnostic (CAD) scheme for distinction between benign and malignant nodules in LDCT scans by use of a massive training artificial neural network (MTANN) is developed, which may be useful in assisting radiologists in the diagnosis of lung nodules in LDCT.
Abstract: Low-dose helical computed tomography (LDCT) is being applied as a modality for lung cancer screening. It may be difficult, however, for radiologists to distinguish malignant from benign nodules in LDCT. Our purpose in this study was to develop a computer-aided diagnostic (CAD) scheme for distinction between benign and malignant nodules in LDCT scans by use of a massive training artificial neural network (MTANN). The MTANN is a trainable, highly nonlinear filter based on an artificial neural network. To distinguish malignant nodules from six different types of benign nodules, we developed multiple MTANNs (multi-MTANN) consisting of six expert MTANNs that are arranged in parallel. Each of the MTANNs was trained by use of input CT images and teaching images containing the estimate of the distribution for the "likelihood of being a malignant nodule", i.e., the teaching image for a malignant nodule contains a two-dimensional Gaussian distribution and that for a benign nodule contains zero. Each MTANN was trained independently with ten typical malignant nodules and ten benign nodules from each of the six types. The outputs of the six MTANNs were combined by use of an integration ANN such that the six types of benign nodules could be distinguished from malignant nodules. After training of the integration ANN, our scheme provided a value related to the "likelihood of malignancy" of a nodule, i.e., a higher value indicates a malignant nodule, and a lower value indicates a benign nodule. Our database consisted of 76 primary lung cancers in 73 patients and 413 benign nodules in 342 patients, which were obtained from a lung cancer screening program on 7847 screenees with LDCT for three years in Nagano, Japan. The performance of our scheme for distinction between benign and malignant nodules was evaluated by use of receiver operating characteristic (ROC) analysis. Our scheme achieved an Az (area under the ROC curve) value of 0.882 in a round-robin test. Our scheme correctly identified 100% (76/76) of malignant nodules as malignant, whereas 48% (200/413) of benign nodules were identified correctly as benign. Therefore, our scheme may be useful in assisting radiologists in the diagnosis of lung nodules in LDCT.

Journal ArticleDOI
TL;DR: This work introduces a standardized evaluation methodology, which can be used for all types of 2-D-3-D registration methods and for different applications and anatomies, and proposes standardized starting positions and failure criteria to allow future researchers to directly compare their methods.
Abstract: In the past few years, a number of two-dimensional (2-D) to three-dimensional (3-D) (2-D-3-D) registration algorithms have been introduced. However, these methods have been developed and evaluated for specific applications, and have not been directly compared. Understanding and evaluating their performance is therefore an open and important issue. To address this challenge we introduce a standardized evaluation methodology, which can be used for all types of 2-D-3-D registration methods and for different applications and anatomies. Our evaluation methodology uses the calibrated geometry of a 3-D rotational X-ray (3DRX) imaging system (Philips Medical Systems, Best, The Netherlands) in combination with image-based 3-D-3-D registration for attaining a highly accurate gold standard for 2-D X-ray to 3-D MR/CT/3DRX registration. Furthermore, we propose standardized starting positions and failure criteria to allow future researchers to directly compare their methods. As an illustration, the proposed methodology has been used to evaluate the performance of two 2-D-3-D registration techniques, viz. a gradient-based and an intensity-based method, for images of the spine. The data and gold standard transformations are available on the internet (http://www.isi.uu.nl/Research/Databases/).

Journal ArticleDOI
TL;DR: It is shown that intermittent transaxial truncation has no effect on the reconstruction in a central region which means that wider patients can be accommodated on existing scanners, and more importantly that radiation exposure can be reduced for region of interest imaging.
Abstract: This paper describes a flexible new methodology for accurate cone beam reconstruction with source positions on a curve (or set of curves). The inversion formulas employed by this methodology are based on first backprojecting a simple derivative in the projection space and then applying a Hilbert transform inversion in the image space. The local nature of the projection space filtering distinguishes this approach from conventional filtered-backprojection methods. This characteristic together with a degree of flexibility in choosing the direction of the Hilbert transform used for inversion offers two important features for the design of data acquisition geometries and reconstruction algorithms. First, the size of the detector necessary to acquire sufficient data for accurate reconstruction of a given region is often smaller than that required by previously documented approaches. In other words, more data truncation is allowed. Second, redundant data can be incorporated for the purpose of noise reduction. The validity of the inversion formulas along with the application of these two properties are illustrated with reconstructions from computer simulated data. In particular, in the helical cone beam geometry, it is shown that 1) intermittent transaxial truncation has no effect on the reconstruction in a central region which means that wider patients can be accommodated on existing scanners, and more importantly that radiation exposure can be reduced for region of interest imaging and 2) at maximum pitch the data outside the Tam-Danielsson window can be used to reduce image noise and thereby improve dose utilization. Furthermore, the degree of axial truncation tolerated by our approach for saddle trajectories is shown to be larger than that of previous methods.

Journal ArticleDOI
TL;DR: A fully automated computer-aided detection (CAD) system for detecting prostatic adenocarcinoma from 4 Tesla ex vivo magnetic resonance (MR) imagery of the prostate that performed better than the experts in terms of accuracy and intrasystem variability.
Abstract: Prostatic adenocarcinoma is the most commonly occurring cancer among men in the United States, second only to skin cancer. Currently, the only definitive method to ascertain the presence of prostatic cancer is by trans-rectal ultrasound (TRUS) directed biopsy. Owing to the poor image quality of ultrasound, the accuracy of TRUS is only 20%-25%. High-resolution magnetic resonance imaging (MRI) has been shown to have a higher accuracy of prostate cancer detection compared to ultrasound. Consequently, several researchers have been exploring the use of high resolution MRI in performing prostate biopsies. Visual detection of prostate cancer, however, continues to be difficult owing to its apparent lack of shape, and the fact that several malignant and benign structures have overlapping intensity and texture characteristics. In this paper, we present a fully automated computer-aided detection (CAD) system for detecting prostatic adenocarcinoma from 4 Tesla ex vivo magnetic resonance (MR) imagery of the prostate. After the acquired MR images have been corrected for background inhomogeneity and nonstandardness, novel three-dimensional (3-D) texture features are extracted from the 3-D MRI scene. A Bayesian classifier then assigns each image voxel a "likelihood" of malignancy for each feature independently. The "likelihood" images generated in this fashion are then combined using an optimally weighted feature combination scheme. Quantitative evaluation was performed by comparing the CAD results with the manually ascertained ground truth for the tumor on the MRI. The tumor labels on the MR slices were determined manually by an expert by visually registering the MR slices with the corresponding regions on the histology slices. We evaluated our CAD system on a total of 33 two-dimensional (2-D) MR slices from five different 3-D MR prostate studies. Five slices from two different glands were used for training. Our feature combination scheme was found to outperform the individual texture features, and also other popularly used feature combination methods, including AdaBoost, ensemble averaging, and majority voting. Further, in several instances our CAD system performed better than the experts in terms of accuracy, the expert segmentations being determined solely from visual inspection of the MRI data. In addition, the intrasystem variability (changes in CAD accuracy with changes in values of system parameters) was significantly lower than the corresponding intraobserver and interobserver variability. CAD performance was found to be very similar for different training sets. Future work will focus on extending the methodology to guide high-resolution MRI-assisted in vivo prostate biopsies.

Journal ArticleDOI
TL;DR: A novel approach to vessel tree reconstruction and its application to nodule detection in thoracic CT scans was developed by using correlation-based enhancement filters and a fuzzy shape representation of the data based on regulated morphological operations that are less sensitive to noise.
Abstract: Vessel tree reconstruction in volumetric data is a necessary prerequisite in various medical imaging applications. Specifically, when considering the application of automated lung nodule detection in thoracic computed tomography (CT) scans, vessel trees can be used to resolve local ambiguities based on global considerations and so improve the performance of nodule detection algorithms. In this study, a novel approach to vessel tree reconstruction and its application to nodule detection in thoracic CT scans was developed by using correlation-based enhancement filters and a fuzzy shape representation of the data. The proposed correlation-based enhancement filters depend on first-order partial derivatives and so are less sensitive to noise compared with Hessian-based filters. Additionally, multiple sets of eigenvalues are used so that a distinction between nodules and vessel junctions becomes possible. The proposed fuzzy shape representation is based on regulated morphological operations that are less sensitive to noise. Consequently, the vessel tree reconstruction algorithm can accommodate vessel bifurcation and discontinuities. A quantitative performance evaluation of the enhancement filters and of the vessel tree reconstruction algorithm was performed. Moreover, the proposed vessel tree reconstruction algorithm reduced the number of false positives generated by an existing nodule detection algorithm by 38%.

Journal ArticleDOI
TL;DR: This work presents a novel texture and shape priors based method for kidney segmentation in ultrasound (US) images that is demonstrated through experimental results on both natural images and US data compared with other image segmentation methods and manual segmentation.
Abstract: This work presents a novel texture and shape priors based method for kidney segmentation in ultrasound (US) images. Texture features are extracted by applying a bank of Gabor filters on test images through a two-sided convolution strategy. The texture model is constructed via estimating the parameters of a set of mixtures of half-planed Gaussians using the expectation-maximization method. Through this texture model, the texture similarities of areas around the segmenting curve are measured in the inside and outside regions, respectively. We also present an iterative segmentation framework to combine the texture measures into the parametric shape model proposed by Leventon and Faugeras. Segmentation is implemented by calculating the parameters of the shape model to minimize a novel energy function. The goal of this energy function is to partition the test image into two regions, the inside one with high texture similarity and low texture variance, and the outside one with high texture variance. The effectiveness of this method is demonstrated through experimental results on both natural images and US data compared with other image segmentation methods and manual segmentation.
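
A minimal sketch of Gabor filter-bank texture feature extraction (frequencies and orientations are arbitrary choices; the paper's two-sided convolution strategy and half-plane Gaussian mixture model are not reproduced):

```python
# Build a per-pixel texture feature vector from the magnitude responses of a
# small bank of Gabor filters at several frequencies and orientations.
import numpy as np
from skimage.filters import gabor

img = np.random.rand(128, 128)          # placeholder for an ultrasound image

features = []
for frequency in (0.1, 0.2, 0.4):       # cycles/pixel
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        real, imag = gabor(img, frequency=frequency, theta=theta)
        features.append(np.sqrt(real**2 + imag**2))   # magnitude response

features = np.stack(features, axis=-1)  # per-pixel texture feature vector
print(features.shape)                   # (128, 128, 12)
```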

Journal ArticleDOI
TL;DR: Algorithms that perform both matching of branchpoints and anatomical labeling of in vivo trees without any human intervention and within a short computing time are presented.
Abstract: Matching of corresponding branchpoints between two human airway trees, as well as assigning anatomical names to the segments and branchpoints of the human airway tree, are of significant interest for clinical applications and physiological studies. In the past, these tasks were often performed manually due to the lack of automated algorithms that can tolerate false branches and anatomical variability typical for in vivo trees. In this paper, we present algorithms that perform both matching of branchpoints and anatomical labeling of in vivo trees without any human intervention and within a short computing time. No hand-pruning of false branches is required. The results from the automated methods show a high degree of accuracy when validated against reference data provided by human experts. 92.9% of the verifiable branchpoint matches found by the computer agree with experts' results. For anatomical labeling, 97.1% of the automatically assigned segment labels were found to be correct.

Journal ArticleDOI
TL;DR: An automated image analysis method for quantification of in vitro angiogenesis is presented and correctly indicates the inhibitory effect of suramin and the stimulatory effect of vascular endothelial growth factor.
Abstract: An automated image analysis method for quantification of in vitro angiogenesis is presented. The method is designed for in vitro angiogenesis assays that are based on co-culturing endothelial cells with fibroblasts. Such assays are used in many current studies in which anti-angiogenic agents for the treatment of cancer are being sought. This search requires accurate quantification of the stimulatory and inhibitory effects of the different agents. The quantification method gives lengths and sizes of the tubule complexes as well as the numbers of junctions in each of them. The method is tested with a set of test images obtained with a commercially available in vitro angiogenesis assay. The results correctly indicate the inhibitory effect of suramin and the stimulatory effect of vascular endothelial growth factor. Moreover, the image analysis method is shown to be robust against variations in illumination. We have implemented a software package that utilizes the methods. The software as well as a set of test images are available at http://www.cs.tut.fi/sgn/csb/angioquant/.
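
A sketch of the kind of skeleton-based measurements involved: total skeleton length as a tubule-length estimate and a count of junction pixels; the paper's per-complex bookkeeping and illumination robustness are not reproduced:

```python
# Skeletonize a binary tubule mask, estimate length from the skeleton, and
# count junctions as skeleton pixels with three or more skeleton neighbors.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

mask = np.zeros((100, 100), bool)
mask[50, 10:90] = True; mask[20:80, 50] = True    # toy cross-shaped "tubules"

skel = skeletonize(mask)
length = skel.sum()                                # pixel-count length estimate

kernel = np.ones((3, 3)); kernel[1, 1] = 0         # 8-neighborhood counter
neighbors = convolve(skel.astype(int), kernel, mode="constant")
junctions = np.logical_and(skel, neighbors >= 3).sum()
print(length, junctions)
```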

Journal ArticleDOI
TL;DR: This work presents a novel spatial mixture model within a fully Bayesian framework with the ability to perform fully adaptive spatial regularization using Markov random fields, and examines the behavior of this model when applied to artificial data with different spatial characteristics, and to functional magnetic resonance imaging SPMs.
Abstract: Mixture models are often used in the statistical segmentation of medical images. For example, they can be used for the segmentation of structural images into different matter types or of functional statistical parametric maps (SPMs) into activations and nonactivations. Nonspatial mixture models segment using models of just the histogram of intensity values. Spatial mixture models have also been developed which augment this histogram information with spatial regularization using Markov random fields. However, these techniques have control parameters, such as the strength of spatial regularization, which need to be tuned heuristically to particular datasets. We present a novel spatial mixture model within a fully Bayesian framework with the ability to perform fully adaptive spatial regularization using Markov random fields. This means that the amount of spatial regularization does not have to be tuned heuristically but is adaptively determined from the data. We examine the behavior of this model when applied to artificial data with different spatial characteristics, and to functional magnetic resonance imaging SPMs.

Journal ArticleDOI
TL;DR: The purpose of this research, the Visible Korean Human (VKH), is to produce an enhanced version of the serially sectioned images of an entire cadaver that can be used to upgrade the 3D images and software.
Abstract: The data from the Visible Human Project (VHP) and the Chinese Visible Human (CVH), which are serially sectioned images of entire cadavers, are being used to produce three-dimensional (3-D) images and software. The purpose of our research, the Visible Korean Human (VKH), is to produce an enhanced version of the serially sectioned images of an entire cadaver that can be used to upgrade the 3-D images and software. These improvements are achieved without drastically changing the methods developed for the VHP and CVH; thus, a complementary solution was found. A Korean male cadaver was chosen, with nothing perfused into the cadaver; the entire body was magnetic resonance (MR) and computed tomography (CT) scanned at 1.0-mm intervals to produce MR and CT images. After scanning, the entire body of the cadaver was embedded and serially sectioned at 0.2-mm intervals; each sectioned surface was input into a personal computer to produce anatomical images (pixel size: 0.2 mm) without any missing images. Eleven anatomical organs in the anatomical images were segmented to produce segmented images. The anatomical and segmented images were stacked and reconstructed to produce 3-D images. The VKH is an ongoing research project; we will produce a female version of the VKH and provide more detailed segmented images. The data from the VHP, CVH, and VKH will provide valuable resources to the medical image library of 3-D images and software in the field of medical education and clinical trials.

Journal ArticleDOI
TL;DR: The aim of this work was to extend a technique based on optical tracking to register MR and X-ray images obtained from the sliding table XMR configuration by providing an improved calibration stage, real-time guidance during cardiovascular catheterization procedures, and further off-line analysis for mapping cardiac electrical data to patient anatomy.
Abstract: The hybrid magnetic resonance (MR)/X-ray suite (XMR) is a recently introduced imaging solution that provides new possibilities for guidance of cardiovascular catheterization procedures. We have previously described and validated a technique based on optical tracking to register MR and X-ray images obtained from the sliding table XMR configuration. The aim of our recent work was to extend our technique by providing an improved calibration stage, real-time guidance during cardiovascular catheterization procedures, and further off-line analysis for mapping cardiac electrical data to patient anatomy. Specially designed optical trackers and a dedicated calibration object have resulted in a single calibration step that can be efficiently checked and updated before each procedure. An X-ray distortion model has been implemented that allows for distortion correction for arbitrary c-arm orientations. During procedures, the guidance system provides a real-time combined MR/X-ray image display consisting of live X-ray images with registered recently acquired MR derived anatomy. It is also possible to reconstruct the location of catheters seen during X-ray imaging in the MR derived patient anatomy. We have applied our registration technique to 13 cardiovascular catheterization procedures. Our system has been used for the real-time guidance of ten radiofrequency ablations and one aortic stent implantation. We demonstrate the real-time guidance using two exemplar cases. In a further two cases we show how off-line analysis of registered image data, acquired during electrophysiology study procedures, has been used to map cardiac electrical measurements to patient anatomy for two different types of mapping catheters. The cardiologists that have used the guidance system suggest that real-time XMR guidance could have substantial value in difficult interventional and electrophysiological procedures, potentially reducing procedure time and delivered radiation dose. Also, the ability to map measured electrical data to patient specific anatomy provides improved visualization and a path to investigation of cardiac electromechanical models.

Journal ArticleDOI
TL;DR: A method to match diffusion tensor magnetic resonance images (DT-MRIs) through the large deformation diffeomorphic metric mapping of vector fields, focusing on the fiber orientations, considered as unit vector fields on the image volume is proposed.
Abstract: This paper proposes a method to match diffusion tensor magnetic resonance images (DT-MRIs) through the large deformation diffeomorphic metric mapping of vector fields, focusing on the fiber orientations, considered as unit vector fields on the image volume. We study a suitable action of diffeomorphisms on such vector fields, and provide an extension of the Large Deformation Diffeomorphic Metric Mapping framework to this type of dataset, resulting in optimizing for geodesics on the space of diffeomorphisms connecting two images. Existence of the minimizers under smoothness assumptions on the compared vector fields is proved, and coarse to fine hierarchical strategies are detailed, to reduce both ambiguities and computation load. This is illustrated by numerical experiments on DT-MRI heart images.

Journal ArticleDOI
TL;DR: A novel definition of tensor "distance" grounded in concepts from information theory is presented and incorporated in a region based active contour model for DTI segmentation.
Abstract: In recent years, diffusion tensor imaging (DTI) has become a popular in vivo diagnostic imaging technique in Radiological sciences. In order for this imaging technique to be more effective, proper image analysis techniques suited for analyzing these high dimensional data need to be developed. In this paper, we present a novel definition of tensor "distance" grounded in concepts from information theory and incorporate it in the segmentation of DTI. In a DTI, the symmetric positive definite (SPD) diffusion tensor at each voxel can be interpreted as the covariance matrix of a local Gaussian distribution. Thus, a natural measure of dissimilarity between SPD tensors would be the Kullback-Leibler (KL) divergence or its relative. We propose the square root of the J-divergence (symmetrized KL) between two Gaussian distributions corresponding to the diffusion tensors being compared and this leads to a novel closed form expression for the "distance" as well as the mean value of a DTI. Unlike the traditional Frobenius norm-based tensor distance, our "distance" is affine invariant, a desirable property in segmentation and many other applications. We then incorporate this new tensor "distance" in a region based active contour model for DTI segmentation. Synthetic and real data experiments are shown to depict the performance of the proposed model.
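
The closed form alluded to: for zero-mean Gaussians with SPD covariances T1 and T2 of size n, the square root of the J-divergence reduces to a trace expression. The constant factors below follow my own derivation and should be checked against the paper:

```python
# sqrt(J-divergence) between diffusion tensors, with J the symmetrized KL
# divergence of the corresponding zero-mean Gaussians:
#   d(T1, T2) = (1/2) * sqrt( tr(T1^{-1} T2) + tr(T2^{-1} T1) - 2n )
import numpy as np

def tensor_distance(T1, T2):
    n = T1.shape[0]
    tr = np.trace(np.linalg.solve(T1, T2)) + np.trace(np.linalg.solve(T2, T1))
    return 0.5 * np.sqrt(tr - 2 * n)

T1 = np.diag([3.0, 1.0, 1.0])                        # prolate tensor
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])   # 90-degree rotation
T2 = R @ T1 @ R.T                                    # same shape, rotated
print(tensor_distance(T1, T2))                       # > 0: orientations differ

# Affine invariance: d(A T1 A^T, A T2 A^T) = d(T1, T2) for invertible A.
A = np.array([[2.0, 0.3, 0.0], [0.0, 1.5, 0.1], [0.0, 0.0, 0.8]])
print(tensor_distance(A @ T1 @ A.T, A @ T2 @ A.T))   # same value as above
```

The affine invariance follows because tr((A T1 Aᵀ)⁻¹ (A T2 Aᵀ)) = tr(T1⁻¹ T2), which is the property the abstract contrasts with the Frobenius-norm distance.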

Journal ArticleDOI
TL;DR: This paper considers a recent promising robust Capon beamformer (RCB), which restores the appeal of SCB including its high resolution and superb interference suppression capabilities, and also retains the attractiveness of DAS including its robustness against steering vector errors.
Abstract: Currently, the nonadaptive delay-and-sum (DAS) beamformer is extensively used for ultrasound imaging, despite the fact that it has lower resolution and worse interference suppression capability than the adaptive standard Capon beamformer (SCB) if the steering vector corresponding to the signal of interest (SOI) is accurately known. The main problem which restricts the use of SCB, however, is that SCB lacks robustness against steering vector errors that are inevitable in practice. Whenever this happens, the performance of SCB may become worse than that of DAS. Therefore, a robust adaptive beamformer is desirable to maintain the robustness of DAS and adaptivity of SCB. In this paper we consider a recent promising robust Capon beamformer (RCB) for ultrasound imaging. We propose two ways of implementing RCB, one based on time delay and the other based on time reversal. RCB extends SCB by allowing the array steering vector to be within an uncertainty set. Hence, it restores the appeal of SCB including its high resolution and superb interference suppression capabilities, and also retains the attractiveness of DAS including its robustness against steering vector errors. The time-delay-based RCB can tolerate the misalignment of data samples and the time-reversal-based RCB can withstand the uncertainty of the Green's function. Both time-delay-based RCB and time-reversal-based RCB can be efficiently computed at a comparable cost to SCB. The excellent performances of the proposed robust adaptive beamforming approaches are demonstrated via a number of simulated and experimental examples.
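
A minimal sketch of the nonadaptive DAS baseline the paper compares against (receive focusing toward one focal point; geometry and data are placeholders). The robust Capon beamformer itself solves an uncertainty-set optimization for the steering vector and is not reproduced here:

```python
# Delay-and-sum: delay each element's RF signal by its time of flight to the
# focal point and sum across the aperture.
import numpy as np

c = 1540.0                                  # speed of sound in tissue, m/s
fs = 40e6                                   # sampling rate, Hz
n_elem, pitch = 64, 0.3e-3                  # linear array geometry
x_elem = (np.arange(n_elem) - n_elem / 2) * pitch

rf = np.random.randn(n_elem, 4096)          # placeholder per-channel RF data

def das_pixel(rf, xf, zf):
    """DAS output for a focal point (xf, zf), assuming a plane transmit at z=0."""
    dist = np.sqrt((x_elem - xf) ** 2 + zf ** 2)         # element-to-point distance
    delays = np.round((zf + dist) / c * fs).astype(int)  # transmit + receive path
    delays = np.clip(delays, 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), delays].sum()

print(das_pixel(rf, xf=0.0, zf=20e-3))
```

Replacing the plain sum with data-adaptive weights computed from the sample covariance of the delayed channel data is what distinguishes Capon-type beamformers from DAS.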