
Showing papers in "IEEE Transactions on Medical Imaging in 2012"


Journal ArticleDOI
TL;DR: The results show that the proposed approach might produce better images with lower noise and more detailed structural features in the authors' selected cases; however, there is no proof that this holds for all kinds of structures.
Abstract: Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous, and other diseases. How to reduce radiation dose while maintaining the diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by the compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning is proven to be an effective way for sparse representation. On the other hand, it is important to consider the statistical property of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures.
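
The sparse coding of image patches against the redundant dictionary is the computational core of this approach. As a rough illustration of that single building block (not the authors' implementation, and with a random dictionary standing in for a learned one), here is a minimal numpy sketch of orthogonal matching pursuit, a standard way to compute a patch's sparse code:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with at most k atoms of D."""
    residual, idx = y.copy(), []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[idx] = 0.0                      # do not re-select an atom
        idx.append(int(np.argmax(corr)))
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[idx] = coef
    return alpha

# toy usage: sparse-code one 8x8 image patch
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
patch = rng.standard_normal(64)
alpha = omp(D, patch, k=5)                   # 5-sparse code of the patch
approx = D @ alpha                           # term entering the sparsity penalty
```

In the full method, this sparse coding step would alternate with a statistically weighted image update, and the dictionary itself may be adapted during reconstruction.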

603 citations


Journal ArticleDOI
TL;DR: It is concluded that tissue overlap and image similarity, whether used alone or together, do not provide valid evidence for accurate registrations and should thus not be reported or accepted as such.
Abstract: The accuracy of nonrigid image registrations is commonly approximated using surrogate measures such as tissue label overlap scores, image similarity, image difference, or transformation inverse consistency error. This paper provides experimental evidence that these measures, even when used in combination, cannot distinguish accurate from inaccurate registrations. To this end, we introduce a “registration” algorithm that generates highly inaccurate image transformations, yet performs extremely well in terms of the surrogate measures. Of the tested criteria, only overlap scores of localized anatomical regions reliably distinguish reasonable from inaccurate registrations, whereas image similarity and tissue overlap do not. We conclude that tissue overlap and image similarity, whether used alone or together, do not provide valid evidence for accurate registrations and should thus not be reported or accepted as such.
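
For reference, the tissue label overlap score at issue is typically a Dice coefficient; a minimal sketch, assuming binary label masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0
```

The paper's argument is that a high value of such a score is necessary but not sufficient evidence of registration accuracy: a transformation can maximize overlap while being anatomically meaningless.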

387 citations


Journal ArticleDOI
TL;DR: The combination of the new hardware and software allows the first clinical 3-D microwave tomographic images of the breast to be produced; two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center.
Abstract: Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring.

323 citations


Journal ArticleDOI
TL;DR: This work presents l1-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes, and proposes a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain.
Abstract: We present l1-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l1-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
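
The shrinkage step that enforces cross-channel joint sparsity can be read as group soft-thresholding of wavelet coefficients across coils. A minimal numpy sketch under that reading (not the released l1-SPIRiT code, and omitting the wavelet transform and the SPIRiT consistency steps):

```python
import numpy as np

def joint_soft_threshold(coeffs, lam):
    """Group soft-thresholding across channels.

    coeffs: complex wavelet coefficients, shape (n_channels, n_coeffs).
    Each coefficient location is shrunk by its joint (L2-across-channels)
    magnitude, so all coils share a common sparsity pattern.
    """
    mag = np.sqrt((np.abs(coeffs) ** 2).sum(axis=0, keepdims=True))
    shrink = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return coeffs * shrink
```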

291 citations


Journal ArticleDOI
TL;DR: The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually.
Abstract: Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k,t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method.

285 citations


Journal ArticleDOI
TL;DR: Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM, yielding fast convergence compared to conventional and state-of-the-art algorithms applicable to CT.
Abstract: Statistical image reconstruction using penalized weighted least-squares (PWLS) criteria can improve image quality in X-ray computed tomography (CT). However, the huge dynamic range of the statistical weights leads to a highly shift-variant inverse problem, making it difficult to precondition and accelerate existing iterative algorithms that attack the statistical model directly. We propose to alleviate the problem by using a variable-splitting scheme that separates the shift-variant and ("nearly") invariant components of the statistical data model and also decouples the regularization term. This leads to an equivalent constrained problem that we tackle using the classical method-of-multipliers framework with alternating minimization. The specific form of our splitting yields an alternating direction method of multipliers (ADMM) algorithm with an inner step involving a "nearly" shift-invariant linear system that is suitable for FFT-based preconditioning using cone-type filters. The proposed method can efficiently handle a variety of convex regularization criteria including smooth edge-preserving regularizers and nonsmooth sparsity-promoting ones based on the l1-norm and total variation. Numerical experiments with synthetic and real in vivo human data illustrate that cone-filter preconditioners accelerate the proposed ADMM, resulting in fast convergence compared to conventional (nonlinear conjugate gradient, ordered subsets) and state-of-the-art (MFISTA, split-Bregman) algorithms that are applicable for CT.
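
Stripped of the CT-specific system matrix and the cone-filter preconditioner, the described splitting yields a compact ADMM loop. The following is a dense toy sketch under that reading, with hypothetical operators A (system matrix) and D (regularization transform); the splits u = Ax (isolating the statistical weights) and v = Dx (isolating the regularizer) make the weighted data term elementwise, and the inner x-update is the "nearly shift-invariant" system that the paper would solve with FFT-preconditioned iterations rather than a direct solve:

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding (the v-update)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_pwls(A, y, w, D, lam, mu=1.0, nu=1.0, n_iter=200):
    """ADMM sketch for  min_x 0.5*||Ax - y||_W^2 + lam*||Dx||_1
    with splits u = Ax and v = Dx (scaled dual form)."""
    n = A.shape[1]
    x = np.zeros(n)
    u, bu = A @ x, np.zeros(A.shape[0])
    v, bv = D @ x, np.zeros(D.shape[0])
    H = mu * A.T @ A + nu * D.T @ D            # inner-system matrix
    for _ in range(n_iter):
        rhs = mu * A.T @ (u - bu) + nu * D.T @ (v - bv)
        x = np.linalg.solve(H, rhs)            # "nearly shift-invariant" solve
        Ax, Dx = A @ x, D @ x
        u = (w * y + mu * (Ax + bu)) / (w + mu)  # W diagonal, so elementwise
        v = soft(Dx + bv, lam / nu)
        bu += Ax - u                           # scaled dual updates
        bv += Dx - v
    return x

# toy usage with hypothetical operators
rng = np.random.default_rng(4)
A = rng.standard_normal((80, 40))
D = np.eye(40) - np.eye(40, k=1)               # finite differences as regularizer
x_true = np.repeat(rng.standard_normal(5), 8)  # piecewise-constant signal
w = rng.uniform(0.5, 50.0, size=80)            # wide dynamic range of weights
y = A @ x_true + rng.standard_normal(80) / np.sqrt(w)
x_hat = admm_pwls(A, y, w, D, lam=0.5)
```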

275 citations


Journal ArticleDOI
Andac Hamamci, N. Kucuk, Kutlay Karaman, Kayihan Engin, Gozde Unal 
TL;DR: A cellular automata based seeded tumor segmentation method on contrast enhanced T1 weighted magnetic resonance images, which standardizes the volume of interest (VOI) and seed selection, and an algorithm based on CA is presented to differentiate necrotic and enhancing tumor tissue content, which gains importance for a detailed assessment of radiation therapy response.
Abstract: In this paper, we present a fast and robust practical tool for segmentation of solid tumors with minimal user interaction to assist clinicians and researchers in radiosurgery planning and assessment of the response to the therapy. Particularly, a cellular automata (CA) based seeded tumor segmentation method on contrast enhanced T1 weighted magnetic resonance (MR) images, which standardizes the volume of interest (VOI) and seed selection, is proposed. First, we establish the connection of the CA-based segmentation to the graph-theoretic methods to show that the iterative CA framework solves the shortest path problem. In that regard, we modify the state transition function of the CA to calculate the exact shortest path solution. Furthermore, a sensitivity parameter is introduced to adapt to the heterogeneous tumor segmentation problem, and an implicit level set surface is evolved on a tumor probability map constructed from CA states to impose spatial smoothness. Sufficient information to initialize the algorithm is gathered from the user simply by a line drawn on the maximum diameter of the tumor, in line with the clinical practice. Furthermore, an algorithm based on CA is presented to differentiate necrotic and enhancing tumor tissue content, which gains importance for a detailed assessment of radiation therapy response. Validation studies on both clinical and synthetic brain tumor datasets demonstrate 80%-90% overlap performance of the proposed algorithm with an emphasis on less sensitivity to seed initialization, robustness with respect to different and heterogeneous tumor types, and its efficiency in terms of computation time.
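
The CA propagation the paper builds on can be sketched as a grow-cut-style strength competition from the seed pixels; iterating such an update to convergence solves a shortest-path problem on the pixel graph, as the paper establishes. A minimal 2-D numpy version (seeds would come from the user-drawn line; edge wrap-around from np.roll is ignored for brevity, and this is not the paper's exact state transition function):

```python
import numpy as np

def ca_tumor_strength(img, seeds, n_iter=200, beta=10.0):
    """Cellular-automata label propagation from seed pixels.

    Each cell keeps a 'strength' in [0, 1]; at every iteration a neighbor
    may conquer a cell if its strength, attenuated by image similarity,
    exceeds the cell's own.  The fixed point is a shortest-path solution
    on the pixel graph (max over paths of the product of edge weights).
    """
    strength = np.where(seeds, 1.0, 0.0).astype(float)
    for _ in range(n_iter):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nb_strength = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(img, (dy, dx), axis=(0, 1))
            # attenuate the neighbor's strength by intensity similarity
            g = np.exp(-beta * np.abs(img - nb_img))
            strength = np.maximum(strength, g * nb_strength)
    return strength  # threshold to obtain the tumor mask
```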

275 citations


Journal ArticleDOI
TL;DR: This work proposes an automated graph partitioning scheme that is able to segment mitochondria at a performance level close to that of a human annotator, and outperforms a state-of-the-art 3-D segmentation technique.
Abstract: It is becoming increasingly clear that mitochondria play an important role in neural function. Recent studies show mitochondrial morphology to be crucial to cellular physiology and synaptic function, and a link between mitochondrial defects and neurodegenerative diseases is strongly suspected. Electron microscopy (EM), with its very high resolution in all three directions, is one of the key tools to look more closely into these issues, but the huge amounts of data it produces make automated analysis necessary. State-of-the-art computer vision algorithms designed to operate on natural 2-D images tend to perform poorly when applied to EM data for a number of reasons. First, the sheer size of a typical EM volume renders most modern segmentation schemes intractable. Furthermore, most approaches ignore important shape cues, relying only on local statistics that easily become confused when confronted with noise and textures inherent in the data. Finally, the conventional assumption that strong image gradients always correspond to object boundaries is violated by the clutter of distracting membranes. In this work, we propose an automated graph partitioning scheme that addresses these issues. It reduces the computational complexity by operating on supervoxels instead of voxels, incorporates shape features capable of describing the 3-D shape of the target objects, and learns to recognize the distinctive appearance of true boundaries. Our experiments demonstrate that our approach is able to segment mitochondria at a performance level close to that of a human annotator, and outperforms a state-of-the-art 3-D segmentation technique.

265 citations


Journal ArticleDOI
TL;DR: A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
Abstract: This paper describes a framework for establishing a reference airway tree segmentation, which was used to quantitatively evaluate 15 different airway tree extraction algorithms in a standardized manner. Because of the sheer difficulty involved in manually constructing a complete reference standard from scratch, we propose to construct the reference using results from all algorithms that are to be evaluated. We start by subdividing each segmented airway tree into its individual branch segments. Each branch segment is then visually scored by trained observers to determine whether or not it is a correctly segmented part of the airway tree. Finally, the reference airway trees are constructed by taking the union of all correctly extracted branch segments. Fifteen airway tree extraction algorithms from different research groups are evaluated on a diverse set of 20 chest computed tomography (CT) scans of subjects ranging from healthy volunteers to patients with severe pathologies, scanned at different sites, with different CT scanner brands, models, and scanning protocols. Three performance measures covering different aspects of segmentation quality were computed for all participating algorithms. Results from the evaluation showed that no single algorithm could extract more than an average of 74% of the total length of all branches in the reference standard, indicating substantial differences between the algorithms. A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.

241 citations


Journal ArticleDOI
TL;DR: A novel multiscale framework that models all brain networks generated over every possible threshold and is based on persistent homology and its various representations such as the Rips filtration, barcodes, and dendrograms to quantify various persistent topological features at different scales in a coherent manner.
Abstract: The brain network is usually constructed by estimating the connectivity matrix and thresholding it at an arbitrary level. The problem with this standard method is that we do not have any generally accepted criteria for determining a proper threshold. Thus, we propose a novel multiscale framework that models all brain networks generated over every possible threshold. Our approach is based on persistent homology and its various representations such as the Rips filtration, barcodes, and dendrograms. This new persistent homological framework enables us to quantify various persistent topological features at different scales in a coherent manner. The barcode is used to quantify and visualize the evolutionary changes of topological features such as the Betti numbers over different scales. By incorporating additional geometric information to the barcode, we obtain a single linkage dendrogram that shows the overall evolution of the network. The difference between the two networks is then measured by the Gromov-Hausdorff distance over the dendrograms. As an illustration, we modeled and differentiated the FDG-PET based functional brain networks of 24 attention-deficit hyperactivity disorder children, 26 autism spectrum disorder children, and 11 pediatric control subjects.
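
For the 0-dimensional topology (connected components), the barcode over all thresholds coincides with the merge heights of single-linkage clustering, which is also how the dendrogram in the paper arises. A small scipy sketch on toy data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# toy symmetric "connectivity" matrix for 5 brain regions (higher = stronger)
rng = np.random.default_rng(1)
C = rng.uniform(0.2, 0.9, size=(5, 5))
C = (C + C.T) / 2
np.fill_diagonal(C, 1.0)

dist = 1.0 - C                      # convert connectivity to a distance
Z = linkage(squareform(dist, checks=False), method='single')

# 0-dimensional barcode: every component is born at threshold 0 and dies
# at its single-linkage merge height (third column of Z)
barcode = [(0.0, float(death)) for death in Z[:, 2]]
print(barcode)
```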

237 citations


Journal ArticleDOI
TL;DR: A novel synergistic boundary- and region-based active contour model is presented that incorporates shape priors in a level set formulation with automated watershed-based initialization; it easily outperforms two state-of-the-art segmentation schemes and is able to resolve up to 91% of overlapping/occluded structures in the images.
Abstract: Active contours and active shape models (ASM) have been widely employed in image segmentation. Major limitations of active contours, however, are 1) their inability to resolve boundaries of intersecting objects and 2) their inability to handle occlusion. Multiple overlapping objects are typically segmented out as a single object. On the other hand, ASMs are limited by point correspondence issues, since object landmarks need to be identified across multiple objects for initial object alignment. ASMs are also constrained in that they can usually only segment a single object in an image. In this paper, we present a novel synergistic boundary- and region-based active contour model that incorporates shape priors in a level set formulation with automated initialization based on watershed. We demonstrate an application of these synergistic active contour models using multiple level sets to segment nuclear and glandular structures on digitized histopathology images of breast and prostate biopsy specimens. Unlike previous related approaches, our model is able to resolve object overlap and separate occluded boundaries of multiple objects simultaneously. The energy functional of the active contour is comprised of three terms. The first term is the prior shape term, modeled on the object of interest, thereby constraining the deformation achievable by the active contour. The second term, a boundary-based term, detects object boundaries from image gradients. The third term drives the shape prior and the contour towards the object boundary based on region statistics. The results of qualitative and quantitative evaluation on 100 prostate and 14 breast cancer histology images for the task of detecting and segmenting nuclei and lymphocytes reveal that the model easily outperforms two state-of-the-art segmentation schemes (geodesic active contour and Rousson shape-based model) and on average is able to resolve up to 91% of overlapping/occluded structures in the images.

Journal ArticleDOI
TL;DR: It was observed that averaging texture descriptors of the same distance negatively impacts the classification performance, while for the single texture features the quantization level does not impact the discrimination power, since AUC = 0.87 was obtained for all six quantization levels.
Abstract: In this paper, we investigated the behavior of 22 co-occurrence statistics combined with six gray-scale quantization levels to classify breast lesions on ultrasound (BUS) images. The database of 436 BUS images used in this investigation comprised 217 carcinoma and 219 benign lesion images. The region delimited by a minimum bounding rectangle around the lesion was employed to calculate the gray-level co-occurrence matrix (GLCM). Next, 22 co-occurrence statistics were computed regarding six quantization levels (8, 16, 32, 64, 128, and 256), four orientations (0°, 45°, 90°, and 135°), and ten distances (1, 2,...,10 pixels). Also, to reduce feature space dimensionality, texture descriptors of the same distance were averaged over all orientations, which is a common practice in the literature. Thereafter, the feature space was ranked using the mutual information technique with the minimal-redundancy-maximal-relevance (mRMR) criterion. Fisher linear discriminant analysis (FLDA) was applied to assess the discrimination power of texture features, by iteratively adding the first m-ranked features to the classification procedure until all of them were considered. The area under the ROC curve (AUC) was used as the figure of merit to measure the performance of the classifier. It was observed that averaging texture descriptors of the same distance negatively impacts the classification performance, since the best AUC of 0.81 was achieved with 32 gray levels and 109 features. On the other hand, regarding the single texture features (i.e., without the averaging procedure), the quantization level does not impact the discrimination power, since AUC = 0.87 was obtained for all six quantization levels. Moreover, the number of features was reduced (to between 17 and 24 features). The texture descriptors that contributed most notably to distinguishing breast lesions were contrast and correlation computed from GLCMs with an orientation of 90° and distances greater than five pixels.
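
The feature extraction step maps directly onto standard library calls; a minimal sketch with scikit-image (assuming its graycomatrix/graycoprops API, and with synthetic data standing in for a real BUS ROI):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# toy 8-bit ROI, quantized to 32 gray levels as in one of the paper's settings
rng = np.random.default_rng(2)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
roi32 = (roi // 8).astype(np.uint8)                 # 256 -> 32 levels

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # 0, 45, 90, 135 degrees
distances = list(range(1, 11))                      # 1..10 pixels
glcm = graycomatrix(roi32, distances=distances, angles=angles,
                    levels=32, symmetric=True, normed=True)

# the two descriptors the paper found most discriminative
contrast = graycoprops(glcm, 'contrast')            # shape (n_dist, n_angles)
correlation = graycoprops(glcm, 'correlation')
feat = correlation[5:, 2]   # 90 degrees, distances greater than five pixels
```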

Journal ArticleDOI
TL;DR: A generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels is presented.
Abstract: We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient's images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field, and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve an accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of patient scans into a common space.

Journal ArticleDOI
TL;DR: In a typical parallel magnetic resonance imaging reconstruction experiment, the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation) is quantified.
Abstract: The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular, but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bezier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations (the inverse-crime situation). We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms.
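
The simplest instance of such a closed form is the ellipse: its 2-D Fourier transform is a scaled Bessel (jinc-type) function, so k-space can be sampled exactly instead of via an FFT of a raster. A minimal sketch, assuming the e^{-i2πk·x} convention and ignoring coil sensitivities and the spline/Bezier regions that the paper also handles:

```python
import numpy as np
from scipy.special import j1

def ellipse_kspace(kx, ky, a, b):
    """Closed-form 2-D Fourier transform of an ellipse indicator with
    semi-axes (a, b) centered at the origin:
        F(k) = a*b*J1(2*pi*u)/u,  u = sqrt((a*kx)^2 + (b*ky)^2),
    with the limit F(0) = pi*a*b (the ellipse area)."""
    u = np.sqrt((a * kx) ** 2 + (b * ky) ** 2)
    safe = np.where(u > 1e-12, u, 1.0)
    return np.where(u > 1e-12,
                    a * b * j1(2 * np.pi * safe) / safe,
                    np.pi * a * b)

# sample k-space exactly: no rasterization, hence no aliasing and no
# "inverse crime" of simulating and reconstructing on the same raster
kx, ky = np.meshgrid(np.arange(-64, 64) / 128.0, np.arange(-64, 64) / 128.0)
F = ellipse_kspace(kx, ky, a=0.3, b=0.2)
```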

Journal ArticleDOI
TL;DR: A novel shear elasticity imaging technique, comb-push ultrasound shear elastography (CUSE), is introduced in which only one rapid data acquisition is needed to reconstruct a full field-of-view 2-D shear wave speed map.
Abstract: Fast and accurate tissue elasticity imaging is essential in studying dynamic tissue mechanical properties. Various ultrasound shear elasticity imaging techniques have been developed in the last two decades. However, to reconstruct a full field-of-view 2-D shear elasticity map, multiple data acquisitions are typically required. In this paper, a novel shear elasticity imaging technique, comb-push ultrasound shear elastography (CUSE), is introduced in which only one rapid data acquisition (less than 35 ms) is needed to reconstruct a full field-of-view 2-D shear wave speed map (40 × 38 mm). Multiple unfocused ultrasound beams arranged in a comb pattern (comb-push) are used to generate shear waves. A directional filter is then applied upon the shear wave field to extract the left-to-right (LR) and right-to-left (RL) propagating shear waves. Local shear wave speed is recovered using a time-of-flight method based on both LR and RL waves. Finally, a 2-D shear wave speed map is reconstructed by combining the LR and RL speed maps. Smooth and accurate shear wave speed maps are reconstructed using the proposed CUSE method in two calibrated homogeneous phantoms with different moduli. Inclusion phantom experiments demonstrate that CUSE is capable of providing good contrast (contrast-to-noise ratio ≥25 dB) between the inclusion and background without artifacts and is insensitive to inclusion positions. Safety measurements demonstrate that all regulated parameters of the ultrasound output level used in the CUSE sequence are well below the FDA limits for diagnostic ultrasound.
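
The time-of-flight step reduces to estimating an arrival-time lag between two lateral positions and dividing distance by lag. A minimal numpy sketch of that single step (the comb-push excitation, directional filtering, and LR/RL combination are omitted):

```python
import numpy as np

def shear_speed_tof(sig_a, sig_b, dx, fs):
    """Time-of-flight shear wave speed between two lateral positions.

    sig_a, sig_b: motion time traces at positions x and x + dx;
    fs: sampling rate (Hz).  The lag is the peak of the cross-correlation,
    and speed = dx / lag.
    """
    xcorr = np.correlate(sig_b, sig_a, mode='full')
    lag = (np.argmax(xcorr) - (len(sig_a) - 1)) / fs   # seconds
    return dx / lag

# toy usage: a Gaussian pulse arriving 0.5 ms later, 2 mm away -> ~4 m/s
t = np.arange(0, 5e-3, 1e-5)                 # 100 kHz sampling
pulse = lambda t0: np.exp(-((t - t0) / 2e-4) ** 2)
print(shear_speed_tof(pulse(1e-3), pulse(1.5e-3), dx=2e-3, fs=1e5))
```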

Journal ArticleDOI
TL;DR: This paper proposes a maximum-a-posteriori reconstruction algorithm for jointly estimating the attenuation and activity distributions from TOF PET data, and shows that the availability of time-of-flight (TOF) information eliminates the cross-talk problem by destroying symmetries in the associated Fisher information matrix.
Abstract: In positron emission tomography (PET) and single photon emission tomography (SPECT), attenuation correction is necessary for quantitative reconstruction of the tracer distribution. Previously, several attempts have been made to estimate the attenuation coefficients from emission data only. These attempts had limited success, because the problem does not have a unique solution, and severe and persistent “cross-talk” between the estimated activity and attenuation distributions was observed. In this paper, we show that the availability of time-of-flight (TOF) information eliminates the cross-talk problem by destroying symmetries in the associated Fisher information matrix. We propose a maximum-a-posteriori reconstruction algorithm for jointly estimating the attenuation and activity distributions from TOF PET data. The performance of the algorithm is studied with 2-D simulations, and further illustrated with phantom experiments and with a patient scan. The estimated attenuation image is robust to noise, and does not suffer from the cross-talk that was observed in non-TOF PET. However, some constraining is still mandatory, because the TOF data determine the attenuation sinogram only up to a constant offset.

Journal ArticleDOI
TL;DR: An accurate model-based inversion algorithm for 3-D optoacoustic image reconstruction is proposed and validated and superior performance versus commonly-used backprojection inversion algorithms is showcased by numerical simulations and phantom experiments.
Abstract: In many practical optoacoustic imaging implementations, dimensionality of the tomographic problem is commonly reduced into two dimensions or 1-D scanning geometries in order to simplify technical implementation, improve imaging speed or increase signal-to-noise ratio. However, this usually comes at a cost of significantly reduced quality of the tomographic data, out-of-plane image artifacts, and overall loss of image contrast and spatial resolution. Quantitative optoacoustic image reconstruction implies therefore collection of point 3-D (volumetric) data from as many locations around the object as possible. Here, we propose and validate an accurate model-based inversion algorithm for 3-D optoacoustic image reconstruction. Superior performance versus commonly-used backprojection inversion algorithms is showcased by numerical simulations and phantom experiments.

Journal ArticleDOI
TL;DR: This statistical atlas can help to improve the computational models used for radio-frequency ablation, cardiac resynchronization therapy, surgical ventricular restoration, or diagnosis and follow-up of heart diseases due to fiber architecture anomalies.
Abstract: Cardiac fibers, as well as their local arrangement in laminar sheets, have a complex spatial variation of their orientation that has an important role in mechanical and electrical cardiac functions. In this paper, a statistical atlas of this cardiac fiber architecture is built for the first time using human datasets. This atlas provides an average description of the human cardiac fiber architecture along with its variability within the population. In this study, the population is composed of ten healthy human hearts whose cardiac fiber architecture is imaged ex vivo with DT-MRI acquisitions. The atlas construction is based on a computational framework that minimizes user interactions and combines most recent advances in image analysis: graph cuts for segmentation, symmetric log-domain diffeomorphic demons for registration, and log-Euclidean metric for diffusion tensor processing and statistical analysis. Results show that the helix angle of the average fiber orientation is highly correlated to the transmural depth and ranges from -41° on the epicardium to +66° on the endocardium. Moreover, we find that the fiber orientation dispersion across the population (13°) is lower than for the laminar sheets (31°). This study, based on human hearts, extends previous studies on other mammals with concurring conclusions and provides a description of the cardiac fiber architecture more specific to human and better suited for clinical applications. Indeed, this statistical atlas can help to improve the computational models used for radio-frequency ablation, cardiac resynchronization therapy, surgical ventricular restoration, or diagnosis and follow-up of heart diseases due to fiber architecture anomalies.
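
Given the reported high correlation with transmural depth, the average helix angle is well summarized by a linear ramp; a worked illustration of the numbers above (linearity is assumed here for the sketch, which the high correlation supports but does not prove):

```python
import numpy as np

def helix_angle(depth):
    """Helix angle (degrees) as a linear ramp in normalized transmural
    depth: -41 at the epicardium (depth=0), +66 at the endocardium
    (depth=1), matching the average values reported above."""
    return -41.0 + 107.0 * np.asarray(depth, dtype=float)

print(helix_angle([0.0, 0.5, 1.0]))   # [-41.   12.5  66. ]
```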

Journal ArticleDOI
TL;DR: The new graph cut-graph search method significantly outperformed both the traditional graph cut and traditional graph search approaches and has the potential to improve clinical management of patients with choroidal neovascularization due to exudative age-related macular degeneration.
Abstract: An automated method is reported for segmenting 3-D fluid-associated abnormalities in the retina, so-called symptomatic exudate-associated derangements (SEAD), from 3-D OCT retinal images of subjects suffering from exudative age-related macular degeneration. In the first stage of a two-stage approach, retinal layers are segmented, candidate SEAD regions identified, and the retinal OCT image is flattened using a candidate-SEAD aware approach. In the second stage, a probability constrained combined graph search-graph cut method refines the candidate SEADs by integrating the candidate volumes into the graph cut cost function as probability constraints. The proposed method was evaluated on 15 spectral domain OCT images from 15 subjects undergoing intravitreal anti-VEGF injection treatment. Leave-one-out evaluation resulted in a true positive volume fraction (TPVF), false positive volume fraction (FPVF) and relative volume difference ratio (RVDR) of 86.5%, 1.7%, and 12.8%, respectively. The new graph cut-graph search method significantly outperformed both the traditional graph cut and traditional graph search approaches (p < 0.01, p < 0.04) and has the potential to improve clinical management of patients with choroidal neovascularization due to exudative age-related macular degeneration.

Journal ArticleDOI
TL;DR: A novel robust active shape model (RASM) matching method is utilized to roughly segment the outline of the lungs, which is then refined through an optimal surface finding approach; the method delivered statistically significantly better segmentation results compared to two commercially available lung segmentation approaches.
Abstract: Segmentation of lungs with (large) lung cancer regions is a nontrivial problem. We present a new fully automated approach for segmentation of lungs with such high-density pathologies. Our method consists of two main processing steps. First, a novel robust active shape model (RASM) matching method is utilized to roughly segment the outline of the lungs. The initial position of the RASM is found by means of a rib cage detection method. Second, an optimal surface finding approach is utilized to further adapt the initial segmentation result to the lung. Left and right lungs are segmented individually. An evaluation on 30 data sets with 40 abnormal (lung cancer) and 20 normal left/right lungs resulted in an average Dice coefficient of 0.975±0.006 and a mean absolute surface distance error of 0.84±0.23 mm. Experiments on the same 30 data sets showed that our method delivered statistically significantly better segmentation results, compared to two commercially available lung segmentation approaches. In addition, our RASM approach is generally applicable and suitable for large shape models.

Journal ArticleDOI
TL;DR: Shear wave imaging (SWI) is proposed and developed, which is an echocardiography-based, noninvasive, real-time, and easy-to-use technique, to map myofiber orientation in vitro and in vivo and succeeded in mapping the transmural fiber orientation in three beating ovine hearts in vivo.
Abstract: The assessment of disrupted myocardial fiber arrangement may help to understand and diagnose hypertrophic or ischemic cardiomyopathy. We hereby proposed and developed shear wave imaging (SWI), which is an echocardiography-based, noninvasive, real-time, and easy-to-use technique, to map myofiber orientation. Five in vitro porcine and three in vivo open-chest ovine hearts were studied. As is known from physics, shear waves propagate faster along than across the fiber direction. SWI is a technique that can generate shear waves travelling in different directions with respect to each myocardial layer. SWI further analyzed the shear wave velocity across the entire left-ventricular (LV) myocardial thickness, ranging between 10 (diastole) and 25 mm (systole), with a resolution of 0.2 mm in the middle segment of the LV anterior wall region. The fiber angle at each myocardial layer was thus estimated by finding the maximum shear wave speed. In the in vitro porcine myocardium (n = 5), the SWI-estimated fiber angles gradually changed from +80° ± 7° (endocardium) to +30° ± 13° (midwall) and -40° ± 10° (epicardium), with 0° aligning with the circumference of the heart. This transmural fiber orientation was well correlated with histology findings (r² = 0.91 ± 0.02, p < 0.0001). SWI further succeeded in mapping the transmural fiber orientation in three beating ovine hearts in vivo. At midsystole, the average fiber orientation exhibited 71° ± 13° (endocardium), 27° ± 8° (midwall), and -26° ± 30° (epicardium). We demonstrated the capability of SWI in mapping myocardial fiber orientation in vitro and in vivo. SWI may serve as a new tool for the noninvasive characterization of myocardial fiber structure.

Journal ArticleDOI
TL;DR: The 2-D x-space signal equation, 2-D image equation, and the concept of signal fading and resolution loss for a projection MPI imager are introduced, and the theoretically predicted x-space spatial resolution is confirmed.
Abstract: Projection magnetic particle imaging (MPI) can improve imaging speed by over 100-fold over traditional 3-D MPI. In this work, we derive the 2-D x-space signal equation and 2-D image equation, and introduce the concept of signal fading and resolution loss for a projection MPI imager. We then describe the design and construction of an x-space projection MPI scanner with a field gradient of 2.35 T/m across the magnet's 10 cm free bore. The system has an expected resolution of 3.5 × 8.0 mm using Resovist tracer, and an experimental resolution of 3.8 × 8.4 mm. The system images 2.5 cm × 5.0 cm partial fields of view (FOVs) at 10 frames/s, and acquires a full field of view of 10 cm × 5.0 cm in 4 s. We conclude by imaging a resolution phantom, a complex “Cal” phantom, and mice injected with Resovist tracer, and experimentally confirm the theoretically predicted x-space spatial resolution.

Journal ArticleDOI
TL;DR: This contribution uses the PET-SORTEO Monte Carlo simulator to evaluate the quantitative accuracy reached by three different anatomical priors when reconstructing positron emission tomography (PET) brain images, using volumetric magnetic resonance imaging (MRI) to provide the anatomical information.
Abstract: In emission tomography, image reconstruction and therefore also tracer development and diagnosis may benefit from the use of anatomical side information obtained with other imaging modalities in the same subject, as it helps to correct for the partial volume effect. One way to implement this is to use the anatomical image for defining the a priori distribution in a maximum-a-posteriori (MAP) reconstruction algorithm. In this contribution, we use the PET-SORTEO Monte Carlo simulator to evaluate the quantitative accuracy reached by three different anatomical priors when reconstructing positron emission tomography (PET) brain images, using volumetric magnetic resonance imaging (MRI) to provide the anatomical information. The priors are: 1) a prior especially developed for FDG PET brain imaging, which relies on a segmentation of the MR-image (Baete, 2004); 2) the joint entropy-prior (Nuyts, 2007); 3) a prior that encourages smoothness within a position-dependent neighborhood, computed from the MR-image. The latter prior was recently proposed by our group in (Vunckx and Nuyts, 2010), and was based on the prior presented by Bowsher (2004). The two latter priors do not rely on an explicit segmentation, which makes them more generally applicable than a segmentation-based prior. All three priors produced a compromise between noise and bias that was clearly better than that obtained with postsmoothed maximum likelihood expectation maximization (MLEM) or MAP with a relative difference prior. The performance of the joint entropy prior was slightly worse than that of the other two priors. The performance of the segmentation-based prior is quite sensitive to the accuracy of the segmentation. In contrast to the joint entropy-prior, the Bowsher-prior is easily tuned and does not suffer from convergence problems.
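
The third prior is easy to sketch: for each voxel, keep only the k neighbors that are most similar in the MR image, and penalize PET differences over that position-dependent neighborhood. A minimal 2-D numpy sketch of this Bowsher-style construction (Bowsher, 2004), with hypothetical parameter choices and edge wrap-around from np.roll ignored for brevity:

```python
import numpy as np

def bowsher_neighbors(mr, k=4):
    """For each pixel, select the k 8-neighbors most similar in the MR
    image; the prior then smooths the PET estimate only over those."""
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)]
    diffs = np.stack([np.abs(mr - np.roll(mr, (dy, dx), axis=(0, 1)))
                      for dy, dx in offsets])           # (8, H, W)
    ranks = np.argsort(np.argsort(diffs, axis=0), axis=0)
    return offsets, ranks < k                           # boolean (8, H, W)

def bowsher_penalty(pet, offsets, mask):
    """Quadratic penalty restricted to the MR-selected neighbors."""
    total = 0.0
    for i, (dy, dx) in enumerate(offsets):
        d = pet - np.roll(pet, (dy, dx), axis=(0, 1))
        total += ((d ** 2) * mask[i]).sum()
    return total
```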

Journal ArticleDOI
TL;DR: This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF.
Abstract: Magnetic particle imaging (MPI) is a new tomographic imaging approach that can quantitatively map magnetic nanoparticle distributions in vivo. It is capable of volumetric real-time imaging at particle concentrations low enough to enable clinical applications. For image reconstruction in 3-D MPI, a system function (SF) is used, which describes the relation between the acquired MPI signal and the spatial origin of the signal. The SF depends on the instrumental configuration, the applied field sequence, and the magnetic particle characteristics. Its properties reflect the quality of the spatial encoding process. This work presents a detailed analysis of a measured SF to give experimental evidence that 3-D MPI encodes information using a set of 3-D spatial patterns or basis functions that is stored in the SF. This resembles filling 3-D k-space in magnetic resonance imaging, but is faster since all information is gathered simultaneously over a broad acquisition bandwidth. A frequency domain analysis shows that the finest structures that can be encoded with the presented SF are as small as 0.6 mm. SF simulations are performed to demonstrate that larger particle cores extend the set of basis functions towards higher resolution and that the experimentally observed spatial patterns require the existence of particles with core sizes of about 30 nm in the calibration sample. A simple formula is presented that qualitatively describes the basis functions to be expected at a certain frequency.

Journal ArticleDOI
TL;DR: A novel computer-aided diagnosis technique for the early diagnosis of the Alzheimer's disease (AD) based on nonnegative matrix factorization (NMF) and support vector machines (SVM) with bounds of confidence is presented, achieving up to 91% classification accuracy with high sensitivity and specificity rates.
Abstract: This paper presents a novel computer-aided diagnosis (CAD) technique for the early diagnosis of the Alzheimer's disease (AD) based on nonnegative matrix factorization (NMF) and support vector machines (SVM) with bounds of confidence. The CAD tool is designed for the study and classification of functional brain images. For this purpose, two different brain image databases are selected: a single photon emission computed tomography (SPECT) database and positron emission tomography (PET) images, both of them containing data for both Alzheimer's disease (AD) patients and healthy controls as a reference. These databases are analyzed by applying the Fisher discriminant ratio (FDR) and nonnegative matrix factorization (NMF) for feature selection and extraction of the most relevant features. The resulting NMF-transformed sets of data, which contain a reduced number of features, are classified by means of a SVM-based classifier with bounds of confidence for decision. The proposed NMF-SVM method yields up to 91% classification accuracy with high sensitivity and specificity rates (both above 90%). This NMF-SVM CAD tool is an accurate method for SPECT and PET AD image classification.
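
The core pipeline (dimensionality reduction by NMF, classification of the loadings by an SVM) can be sketched with scikit-learn. Toy random data stands in for the FDR-selected voxel features, and the bounds-of-confidence decision rule is omitted:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# rows = subjects (AD vs. control), columns = nonnegative image features
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(60, 500))
y = np.r_[np.zeros(30, int), np.ones(30, int)]

# NMF reduces the feature space; an SVM classifies the NMF loadings
clf = make_pipeline(NMF(n_components=8, init='nndsvda', max_iter=500),
                    SVC(kernel='rbf'))
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```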

Journal ArticleDOI
TL;DR: A two-stage methodology for the detection and classification of DME severity from color fundus images is proposed and the effectiveness of the proposed solution is established.
Abstract: Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using the normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using a rotational asymmetry metric by examining the symmetry of macular region. The performance of the proposed methodology and features are evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for the moderate case and 100% for severe cases. These results establish the effectiveness of the proposed solution.

Journal ArticleDOI
TL;DR: A patch-based regularization for iterative image reconstruction that uses neighborhood patches instead of individual pixels in computing the nonquadratic penalty is presented, which can achieve higher contrast recovery for small objects without increasing background variation compared with the quadratic regularization.
Abstract: Iterative image reconstruction for positron emission tomography (PET) can improve image quality by using spatial regularization that penalizes image intensity difference between neighboring pixels. The most commonly used quadratic penalty often oversmoothes edges and fine features in reconstructed images. Nonquadratic penalties can preserve edges but often introduce piece-wise constant blocky artifacts and the results are also sensitive to the hyper-parameter that controls the shape of the penalty function. This paper presents a patch-based regularization for iterative image reconstruction that uses neighborhood patches instead of individual pixels in computing the nonquadratic penalty. The new regularization is more robust than the conventional pixel-based regularization in differentiating sharp edges from random fluctuations due to noise. An optimization transfer algorithm is developed for the penalized maximum likelihood estimation. Each iteration of the algorithm can be implemented in three simple steps: an EM-like image update, an image smoothing and a pixel-by-pixel image fusion. Computer simulations show that the proposed patch-based regularization can achieve higher contrast recovery for small objects without increasing background variation compared with the quadratic regularization. The reconstruction is also more robust to the hyper-parameter than conventional pixel-based nonquadratic regularizations. The proposed regularization method has been applied to real 3-D PET data.
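
The idea is to evaluate the nonquadratic potential on distances between neighborhood patches rather than between pixel pairs. A minimal numpy sketch of such a patch-based penalty (an exponential potential is chosen purely for illustration and is not necessarily the paper's choice):

```python
import numpy as np
from scipy.signal import convolve2d

def patch_penalty(img, patch=3, sigma=1.0):
    """Patch-based regularizer: for each 4-neighbor pixel pair, compare the
    patches centered on the two pixels instead of the two intensities, and
    feed the patch distance into a nonquadratic potential."""
    r = patch // 2
    pad = np.pad(img, r, mode='edge')
    box = np.ones((patch, patch)) / patch ** 2
    total = 0.0
    for dy, dx in ((0, 1), (1, 0)):
        sqdiff = (pad - np.roll(pad, (dy, dx), axis=(0, 1))) ** 2
        # squared patch distance = box-filtered squared pixel difference
        pdist = convolve2d(sqdiff, box, mode='same')[r:-r, r:-r]
        total += np.sum(1.0 - np.exp(-pdist / sigma ** 2))
    return total
```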

Journal ArticleDOI
TL;DR: A quantitative relationship is proposed here to relate water diffusion anisotropy measurements directly to characteristics of neuronal morphology, and excellent agreement with the theoretical results, as well as agreement with previously published values for locally-induced water diffusion anisotropy and volume fraction of the neuropil, is observed.
Abstract: As neurons of the developing brain form functional circuits, they undergo morphological differentiation. In immature cerebral cortex, radially-oriented cellular processes of undifferentiated neurons impede water diffusion parallel, but not perpendicular, to the pial surface, as measured via diffusion-weighted magnetic resonance imaging, and give rise to water diffusion anisotropy. As the cerebral cortex matures, the loss of water diffusion anisotropy accompanies cellular morphological differentiation. A quantitative relationship is proposed here to relate water diffusion anisotropy measurements directly to characteristics of neuronal morphology. This expression incorporates the effects of local diffusion anisotropy within cellular processes, as well as the effects of anisotropy in the orientations of cellular processes. To obtain experimental support for the proposed relationship, tissue from 13- and 31-day-old ferrets was stained using the rapid Golgi technique, and the 3-D orientation distribution of neuronal processes was characterized using confocal microscopic examination of reflected visible light images. Coregistration of the MRI and Golgi data enables a quantitative evaluation of the proposed theory, and excellent agreement with the theoretical results, as well as agreement with previously published values for locally-induced water diffusion anisotropy and volume fraction of the neuropil, is observed.

Journal ArticleDOI
TL;DR: A novel approach to motion correction based on dual gating and mass-preserving hyperelastic image registration is presented, which accounts for intensity modulations caused by the highly nonrigid cardiac motion.
Abstract: Respiratory and cardiac motion leads to image degradation in positron emission tomography (PET) studies of the human heart. In this paper we present a novel approach to motion correction based on dual gating and mass-preserving hyperelastic image registration. Thereby, we account for intensity modulations caused by the highly nonrigid cardiac motion. This leads to accurate and realistic motion estimates, which are quantitatively validated on software phantom data and carried over to clinically relevant data using a hardware phantom. For patient data, the proposed method is first evaluated in a high-statistics (20 min scans) dual gating study of 21 patients. It is shown that the proposed approach properly corrects PET images for both cardiac and respiratory motion. In a second study, the list mode data of the same patients is cropped to a scan time reasonable for clinical practice (3 min). This low-statistics study not only shows the clinical applicability of our method but also demonstrates its robustness against noise, obtained by hyperelastic regularization.
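
The mass-preserving element can be made concrete: the transformed image is modulated by the Jacobian determinant of the mapping, so total intensity (tracer mass) is conserved rather than merely interpolated, i.e., I_warped(x) = I(phi(x)) * det(grad phi(x)). A minimal 2-D scipy sketch, assuming a hypothetical coordinate-array representation of the transformation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mass_preserving_warp(img, phi_y, phi_x):
    """Warp img by the mapping phi (given as per-pixel coordinate arrays
    phi_y, phi_x) and modulate by the Jacobian determinant, so the total
    intensity (tracer mass) is preserved."""
    # interpolate the image at the mapped coordinates
    warped = map_coordinates(img, [phi_y, phi_x], order=1, mode='nearest')
    # Jacobian determinant of phi via finite differences
    dyy, dyx = np.gradient(phi_y)
    dxy, dxx = np.gradient(phi_x)
    det = dyy * dxx - dyx * dxy
    return warped * det
```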

Journal ArticleDOI
TL;DR: An updated and critical review of cardiac motion tracking methods including major references and those proposed in the past ten years is provided and can serve as a tutorial for new researchers entering the field.
Abstract: Magnetic resonance imaging (MRI) is a highly advanced and sophisticated imaging modality for cardiac motion tracking and analysis, capable of providing 3D analysis of global and regional cardiac function with great accuracy and reproducibility. In the past few years, numerous efforts have been devoted to cardiac motion recovery and deformation analysis from MR image sequences. Many approaches have been proposed for tracking cardiac motion and for computing deformation parameters and mechanical properties of the heart from a variety of cardiac MR imaging techniques. In this paper, an updated and critical review of cardiac motion tracking methods including major references and those proposed in the past ten years is provided. The MR imaging and analysis techniques surveyed are based on cine MRI, tagged MRI, phase contrast MRI, DENSE, and SENC. This paper can serve as a tutorial for new researchers entering the field.