
Showing papers in "IEEE Transactions on Medical Imaging" in 2011


Journal ArticleDOI
TL;DR: Dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods are demonstrated.
Abstract: Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.
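
To make the alternation concrete, here is a minimal Python sketch of the two-step loop described above. It is not the authors' implementation: a fixed orthonormal patch dictionary D (e.g., a DCT basis) stands in for the adaptively learned one, and hard thresholding stands in for the paper's sparse coding step.

```python
# Minimal sketch of alternating dictionary-sparsified CS-MRI reconstruction.
# Assumption: D is a fixed orthonormal (p*p x p*p) dictionary; the paper
# instead learns D adaptively from the image being reconstructed.
import numpy as np

def kspace_consistency(img, kdata, mask):
    """Data-fidelity step: restore the acquired k-space samples."""
    k = np.fft.fft2(img)
    k[mask] = kdata[mask]
    return np.fft.ifft2(k)

def patch_denoise(img, D, thresh, p=8):
    """Aliasing/noise removal step: sparse-code overlapping patches in D."""
    out = np.zeros_like(img, dtype=complex)
    weight = np.zeros(img.shape)
    for i in range(img.shape[0] - p + 1):
        for j in range(img.shape[1] - p + 1):
            patch = img[i:i+p, j:j+p].reshape(-1)
            alpha = D.conj().T @ patch            # analysis coefficients
            alpha[np.abs(alpha) < thresh] = 0     # enforce patch sparsity
            out[i:i+p, j:j+p] += (D @ alpha).reshape(p, p)
            weight[i:i+p, j:j+p] += 1
    return out / weight                           # average overlapping patches

def reconstruct(kdata, mask, D, iters=10, thresh=0.05):
    img = np.fft.ifft2(kdata * mask)              # zero-filled starting point
    for _ in range(iters):
        img = patch_denoise(img, D, thresh)
        img = kspace_consistency(img, kdata, mask)
    return img
```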

1,015 citations


Journal ArticleDOI
TL;DR: A supervised method that uses a neural network (NN) scheme for pixel classification and computes a 7-D vector composed of gray-level and moment invariants-based features for pixel representation, making it suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection.
Abstract: This paper presents a new supervised method for blood vessel detection in digital retinal images. This method uses a neural network (NN) scheme for pixel classification and computes a 7-D vector composed of gray-level and moment invariants-based features for pixel representation. The method was evaluated on the publicly available DRIVE and STARE databases, widely used for this purpose, since they contain retinal images where the vascular structure has been precisely marked by experts. Method performance on both sets of test images is better than other existing solutions in the literature. The method proves especially accurate for vessel detection in STARE images. Its application to this database (even when the NN was trained on the DRIVE database) outperforms all analyzed segmentation approaches. Its effectiveness and robustness with different image conditions, together with its simplicity and fast implementation, make this blood vessel segmentation proposal suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection.
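
A minimal sketch of the pixel-classification idea, under stated assumptions: five simple gray-level descriptors plus the first two Hu moment invariants approximate the paper's 7-D feature vector, and the window size and network shape are illustrative.

```python
# Sketch of supervised vessel-pixel classification (not the exact pipeline).
import numpy as np
from sklearn.neural_network import MLPClassifier

def moment_invariants(win):
    """First two Hu-style moment invariants of a local window."""
    y, x = np.mgrid[:win.shape[0], :win.shape[1]]
    m00 = win.sum() + 1e-12
    xc, yc = (x * win).sum() / m00, (y * win).sum() / m00
    mu = lambda p, q: (((x - xc) ** p) * ((y - yc) ** q) * win).sum()
    n = lambda p, q: mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = n(2, 0) + n(0, 2)
    h2 = (n(2, 0) - n(0, 2)) ** 2 + 4 * n(1, 1) ** 2
    return np.log(abs(h1) + 1e-12), np.log(abs(h2) + 1e-12)

def pixel_features(green, i, j, w=9):
    """7-D feature vector for pixel (i, j) of the green channel."""
    half = w // 2
    win = green[i - half:i + half + 1, j - half:j + half + 1]
    c = green[i, j]
    h1, h2 = moment_invariants(win)
    return [c, c - win.min(), win.max() - c, c - win.mean(), win.std(), h1, h2]

# clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=500)
# clf.fit(X_train, y_train)   # y: 1 = vessel pixel, 0 = background
```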

913 citations


Journal ArticleDOI
TL;DR: A novel algorithm to reconstruct dynamic magnetic resonance imaging data from under-sampled k-t space data using the compact representation of the data in the Karhunen-Loeve transform (KLT) domain to exploit the correlations in the dataset.
Abstract: We introduce a novel algorithm to reconstruct dynamic magnetic resonance imaging (MRI) data from under-sampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loeve transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
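
The spectrally regularized recovery can be sketched as iterative singular value thresholding. This toy version, which is not the paper's algorithm, applies the sampling mask directly to matrix entries (rows are voxels, columns are time frames) rather than in k-t space, and omits the additional sparsity term.

```python
# Sketch of low-rank (KLT-domain) dynamic MRI recovery as matrix completion.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def recover(meas, mask, tau=1.0, iters=100):
    """meas, mask: (voxels x frames); mask marks the measured entries."""
    X = np.where(mask, meas, 0.0)
    for _ in range(iters):
        X = svt(X, tau)              # spectral shrinkage: joint temporal basis
        X = np.where(mask, meas, X)  # re-impose the measured data
    return X
```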

646 citations


Journal ArticleDOI
TL;DR: This paper introduces a robust, learning-based brain extraction system (ROBEX), which combines a discriminative and a generative model to achieve the final result and shows that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
Abstract: Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimage pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not considered to be completely solved yet. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS, 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
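
A sketch of the discriminative half only, assuming illustrative voxel features (intensity, gradient magnitude, normalized position) in place of the paper's feature set; the generative point distribution model and the graph-cut refinement are beyond a short example.

```python
# Sketch of a random-forest voxel scorer for brain-boundary detection.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def voxel_features(vol):
    """Per-voxel features: intensity, gradient magnitude, normalized position."""
    grad = ndimage.gaussian_gradient_magnitude(vol, sigma=1.0)
    pos = np.indices(vol.shape).reshape(3, -1).T / np.array(vol.shape)
    return np.column_stack([vol.ravel(), grad.ravel(), pos])

# Training on scans with labeled boundary voxels (hypothetical variables):
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(np.vstack([voxel_features(v) for v in train_vols]), boundary_labels)
# boundary_prob = clf.predict_proba(voxel_features(new_vol))[:, 1]
```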

539 citations


Journal ArticleDOI
TL;DR: The organization of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups are detailed.
Abstract: EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intra-patient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.

436 citations


Journal ArticleDOI
TL;DR: An automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images is presented, along with a novel cup segmentation method based on anatomical evidence, such as vessel bends at the cup boundary, considered relevant by glaucoma experts.
Abstract: Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.
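
The last step, OD parameterization from the two segmentations, reduces to simple mask measurements; a minimal sketch:

```python
# Sketch of cup-to-disk ratios from binary disk and cup segmentation masks.
import numpy as np

def cup_to_disk_ratios(disk_mask, cup_mask):
    def vertical_diameter(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1
    vcdr = vertical_diameter(cup_mask) / vertical_diameter(disk_mask)
    acdr = cup_mask.sum() / disk_mask.sum()   # area ratio
    return vcdr, acdr
```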

412 citations


Journal ArticleDOI
TL;DR: Experiments on simulated and in vivo magnetic resonance images show that the proposed patch-based image labeling method relying on a label propagation framework is very successful in providing automated human brain labeling.
Abstract: We propose in this work a patch-based image labeling method relying on a label propagation framework. Based on image intensity similarities between the input image and an anatomy textbook, an original strategy which does not require any nonrigid registration is presented. Following recent developments in nonlocal image denoising, the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches. Experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling.
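
The weighted-graph label fusion can be sketched directly from the description: each target patch votes over similar atlas ("anatomy textbook") patches within a search window, with no registration. Patch size, search radius, and the bandwidth h are illustrative assumptions.

```python
# Sketch of nonlocal patch-based label propagation for one pixel.
import numpy as np

def patch_label(target, atlas_imgs, atlas_labels, i, j, p=3, search=5, h=0.1):
    """Assumes (i, j) and its search window lie in the image interior."""
    half = p // 2
    tp = target[i - half:i + half + 1, j - half:j + half + 1]
    num = den = 0.0
    for img, lab in zip(atlas_imgs, atlas_labels):
        for di in range(-search, search + 1):
            for dj in range(-search, search + 1):
                ii, jj = i + di, j + dj
                ap = img[ii - half:ii + half + 1, jj - half:jj + half + 1]
                w = np.exp(-np.sum((tp - ap) ** 2) / h ** 2)  # patch similarity
                num += w * lab[ii, jj]
                den += w
    return num / den   # soft label; threshold at 0.5 for a hard label
```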

304 citations


Journal ArticleDOI
TL;DR: It is demonstrated both theoretically and experimentally that multidimensional MPI is a linear shift-invariant imaging system with an analytic point spread function and a fast image reconstruction method that obtains the intrinsic MPI image with high signal-to-noise ratio via a simple gridding operation in x-space.
Abstract: Magnetic particle imaging (MPI) is a promising new medical imaging tracer modality with potential applications in human angiography, cancer imaging, in vivo cell tracking, and inflammation imaging. Here we demonstrate both theoretically and experimentally that multidimensional MPI is a linear shift-invariant imaging system with an analytic point spread function. We also introduce a fast image reconstruction method that obtains the intrinsic MPI image with high signal-to-noise ratio via a simple gridding operation in x-space. We also demonstrate a method to reconstruct large field-of-view (FOV) images using partial FOV scanning, despite the loss of first harmonic image information due to direct feedthrough contamination. We conclude with the first experimental test of multidimensional x-space MPI.
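
A minimal 1-D sketch of the x-space gridding idea, assuming the FFP position and velocity along the trajectory are known: the received signal is velocity-normalized and averaged into the grid cell under the instantaneous FFP position.

```python
# Sketch of 1-D x-space MPI gridding (velocity compensation + binning).
import numpy as np

def xspace_grid(signal, ffp_pos, ffp_vel, grid_edges):
    """Assumes samples near zero-velocity turning points were discarded."""
    nbins = len(grid_edges) - 1
    img, hits = np.zeros(nbins), np.zeros(nbins)
    rho = signal / ffp_vel                      # velocity normalization
    idx = np.digitize(ffp_pos, grid_edges) - 1  # grid cell per time sample
    for k, r in zip(idx, rho):
        if 0 <= k < nbins:
            img[k] += r
            hits[k] += 1
    return img / np.maximum(hits, 1)            # simple average per cell
```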

264 citations


Journal ArticleDOI
TL;DR: This paper investigates the use of random encoding for CS-MRI, in an effort to emulate the “universal” encoding schemes suggested by the theoretical CS literature, and results indicate that random encoding has the potential to outperform conventional encoding in certain scenarios.
Abstract: Compressed sensing (CS) has the potential to reduce magnetic resonance (MR) data acquisition time. In order for CS-based imaging schemes to be effective, the signal of interest should be sparse or compressible in a known representation, and the measurement scheme should have good mathematical properties with respect to this representation. While MR images are often compressible, the second requirement is often only weakly satisfied with respect to commonly used Fourier encoding schemes. This paper investigates the use of random encoding for CS-MRI, in an effort to emulate the “universal” encoding schemes suggested by the theoretical CS literature. This random encoding is achieved experimentally with tailored spatially-selective radio-frequency (RF) pulses. Both simulation and experimental studies were conducted to investigate the imaging properties of this new scheme with respect to Fourier schemes. Results indicate that random encoding has the potential to outperform conventional encoding in certain scenarios. However, our study also indicates that random encoding fails to satisfy theoretical sufficient conditions for stable and accurate CS reconstruction in many scenarios of interest. Therefore, there is still no general theoretical performance guarantee for CS-MRI, with or without random encoding, and CS-based methods should be developed and validated carefully in the context of specific applications.

253 citations


Journal ArticleDOI
TL;DR: Novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE-reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems are presented.
Abstract: Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data (SENSE-reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
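
The variable-splitting recipe can be sketched generically. Assuming abstract operators A (encoding), W (sparsifying transform), and a user-supplied inner solver for the penalized least-squares step, one AL (ADMM-style) iteration alternates a quadratic solve, a shrinkage, and a multiplier update.

```python
# Sketch of variable splitting + augmented Lagrangian for
#   min_x 0.5 * ||A x - y||^2 + lam * ||W x||_1,  with u = W x split out.
import numpy as np

def soft(z, t):
    """Soft threshold, valid for complex arrays."""
    mag = np.abs(z)
    return z * np.maximum(mag - t, 0) / np.maximum(mag, 1e-12)

def al_reconstruct(A, At, W, Wt, y, lam, mu, iters, solve_normal):
    """solve_normal(rhs) must return (At A + mu Wt W)^{-1} rhs; for SENSE
    this inner solve is where structure-exploiting tricks enter."""
    x = At(y)
    u = W(x)
    eta = np.zeros_like(u)
    for _ in range(iters):
        x = solve_normal(At(y) + mu * Wt(u - eta))  # quadratic x-update
        u = soft(W(x) + eta, lam / mu)              # shrinkage u-update
        eta = eta + W(x) - u                        # multiplier update
    return x
```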

244 citations


Journal ArticleDOI
TL;DR: An integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method for quantitative analysis of histopathological images, which achieves better results than the other compared methods.
Abstract: For quantitative analysis of histopathological images, such as the lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method. For the segmentation part, we segment the cell regions from the other areas by classifying the image pixels into either cell or extra-cellular category. Instead of using pixel color intensities, the color-texture extracted at the local neighborhood of each pixel is utilized as the input to our classification algorithm. The color-texture at each pixel is extracted by local Fourier transform (LFT) from a new color space, the most discriminant color space (MDC). The MDC color space is optimized to be a linear combination of the original RGB color space so that the extracted LFT texture features in the MDC color space can achieve most discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and image integral. For the splitting part, given a connected component of the segmentation map, we initially differentiate whether it is a touching-cell clump or a single nontouching cell. The differentiation is mainly based on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed out by Fourier shape descriptor before carrying out an iterative, concave-point and radial-symmetry based splitting algorithm. To test the validity, effectiveness and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with nonuniform illumination condition. For comparison purposes, the results of the proposed segmentation algorithm are evaluated against the outputs of superpixel, graph-cut, mean-shift, and two state-of-the-art pathological image segmentation methods using ground-truth that was established by manual segmentation of cells in the original images. Our segmentation algorithm achieves better results than the other compared methods. The results of splitting are evaluated in terms of under-splitting, over-splitting, and encroachment errors. By summing up the three types of errors, we achieve a total error rate of 5.25% per image.
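
The touching-clump test can be sketched compactly; here the peak of the distance transform stands in, as an assumption, for the paper's most likely radial-symmetry center, and the threshold is illustrative.

```python
# Sketch of the touching-clump differentiation: compare the geometric
# centroid of a connected component with its radial-symmetry center.
import numpy as np
from scipy import ndimage

def is_touching_clump(component_mask, rel_thresh=0.2):
    dist = ndimage.distance_transform_edt(component_mask)
    sym_center = np.unravel_index(np.argmax(dist), dist.shape)
    centroid = ndimage.center_of_mass(component_mask)
    gap = np.hypot(sym_center[0] - centroid[0], sym_center[1] - centroid[1])
    scale = np.sqrt(component_mask.sum())   # ~component radius
    return gap / scale > rel_thresh         # large offset suggests a clump
```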

Journal ArticleDOI
TL;DR: This paper introduces two real-time elastography techniques based on analytic minimization (AM) of regularized cost functions: the first method produces axial strain and integer lateral displacement, while the second produces both axial and lateral strains.
Abstract: This paper introduces two real-time elastography techniques based on analytic minimization (AM) of regularized cost functions. The first method (1D AM) produces axial strain and integer lateral displacement, while the second method (2D AM) produces both axial and lateral strains. The cost functions incorporate similarity of radio-frequency (RF) data intensity and displacement continuity, making both AM methods robust to small decorrelations present throughout the image. We also exploit techniques from robust statistics to make the methods resistant to large local decorrelations. We further introduce Kalman filtering for calculating the strain field from the displacement field given by the AM methods. Simulation and phantom experiments show that both methods generate strain images with high SNR, CNR and resolution. Both methods work for strains as high as 10% and run in real-time. We also present in vivo patient trials of ablation monitoring. An implementation of the 2D AM method as well as phantom and clinical RF-data can be downloaded.
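
The strain-estimation step admits a short sketch: axial strain as the least-squares slope of the displacement over a sliding window (the paper's Kalman filtering refinement is omitted).

```python
# Sketch of least-squares axial strain from an axial displacement profile.
import numpy as np

def ls_strain(displacement, window=9):
    """displacement: float array of axial displacement along one RF line."""
    half = window // 2
    z = np.arange(window) - half                      # centered sample index
    strain = np.zeros(len(displacement))
    for i in range(half, len(displacement) - half):
        d = displacement[i - half:i + half + 1]
        strain[i] = np.dot(z, d - d.mean()) / np.dot(z, z)  # LS slope
    return strain
```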

Journal ArticleDOI
Kangjoo Lee, Sungho Tak, Jong Chul Ye
TL;DR: A new data-driven fMRI analysis derived solely from the sparsity of the signals is proposed, enabling estimation of a spatially adaptive design matrix as well as sparse signal components that represent synchronous, functionally organized and integrated neural hemodynamics.
Abstract: We propose a novel statistical analysis method for functional magnetic resonance imaging (fMRI) to overcome the drawbacks of conventional data-driven methods such as the independent component analysis (ICA). Although ICA has been broadly applied to fMRI due to its capacity to separate spatially or temporally independent components, the assumption of independence has been challenged by recent studies showing that ICA does not guarantee independence of simultaneously occurring distinct activity patterns in the brain. Instead, sparsity of the signal has been shown to be more promising. This coincides with biological findings such as sparse coding in V1 simple cells, electrophysiological experiment results in the human medial temporal lobe, etc. The main contribution of this paper is, therefore, a new data-driven fMRI analysis that is derived solely based upon the sparsity of the signals. A compressed sensing based data-driven sparse generalized linear model is proposed that enables estimation of spatially adaptive design matrix as well as sparse signal components that represent synchronous, functionally organized and integrated neural hemodynamics. Furthermore, a minimum description length (MDL)-based model order selection rule is shown to be essential in selecting unknown sparsity level for sparse dictionary learning. Using simulation and real fMRI experiments, we show that the proposed method can adapt to individual variation better than conventional ICA methods.
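
In the same spirit (not the paper's algorithm), a sparse dictionary decomposition of BOLD data yields a data-driven design matrix of temporal atoms with sparse spatial weights; scikit-learn's dictionary learner serves as a stand-in, and the MDL model-order selection is omitted.

```python
# Sketch of a sparsity-driven fMRI decomposition at toy scale.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def sparse_glm(Y, k=20, alpha=1.0):
    """Y: (time x voxels) BOLD data. Returns a data-driven design matrix
    of temporal atoms and sparse per-voxel spatial weights."""
    dl = DictionaryLearning(n_components=k, alpha=alpha)
    codes = dl.fit_transform(Y.T)    # (voxels x k) sparse spatial weights
    design = dl.components_.T        # (time x k) temporal responses
    return design, codes
```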

Journal ArticleDOI
TL;DR: This study presents an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives, and shows an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology.
Abstract: In this study we present an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives. The methodology is based on local patch representation of the image content, using a "bag of visual words" approach. We explore the effects of various parameters on system performance, and show best results using dense sampling of simple features with spatial content, and a nonlinear kernel-based support vector machine (SVM) classifier. In a recent international competition the system was ranked first in discriminating orientation and body regions in X-ray images. In addition to organ-level discrimination, we show an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology. The system discriminates between healthy and pathological cases, and is also shown to successfully identify specific pathologies in a set of chest radiographs taken from a routine hospital examination. This is a first step towards similarity-based categorization, which has major clinical implications for computer-assisted diagnostics.
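
A minimal bag-of-visual-words pipeline in the spirit of the description, with illustrative patch size, stride, and vocabulary size; the paper's spatial content and kernel choices are richer than this sketch.

```python
# Sketch of a bag-of-visual-words radiograph classifier:
# dense patches -> k-means vocabulary -> word histograms -> kernel SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def dense_patches(img, p=9, stride=6):
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, img.shape[0] - p, stride)
                     for j in range(0, img.shape[1] - p, stride)])

def bow_histogram(img, vocab):
    words = vocab.predict(dense_patches(img))
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / h.sum()

def train(images, labels, k=200):
    vocab = KMeans(n_clusters=k).fit(
        np.vstack([dense_patches(im) for im in images]))
    X = np.array([bow_histogram(im, vocab) for im in images])
    clf = SVC(kernel="rbf").fit(X, labels)
    return vocab, clf
```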

Journal ArticleDOI
TL;DR: An accurate and efficient optic disc detection and segmentation technique designed to capture both the circular shape of the OD and the image variation across the OD boundary simultaneously under the framework of computer-aided diagnosis.
Abstract: Under the framework of computer-aided diagnosis, this paper presents an accurate and efficient optic disc (OD) detection and segmentation technique. A circular transformation is designed to capture both the circular shape of the OD and the image variation across the OD boundary simultaneously. For each retinal image pixel, it evaluates the image variation along multiple evenly-oriented radial line segments of specific length. The pixels with the maximum variation along all radial line segments are determined, which can be further exploited to locate both the OD center and the OD boundary accurately. Experiments show that OD detection accuracies of 99.75%, 97.5%, and 98.77% are obtained for the STARE dataset, the ARIA dataset, and the MESSIDOR dataset, respectively, and the OD center error lies around six pixels for the STARE dataset and the ARIA dataset, which is much smaller than that of state-of-the-art methods, ranging from 14 to 29 pixels. In addition, OD segmentation accuracies of 93.4% and 91.7% are obtained for the STARE and ARIA datasets, respectively, which consist of many severely degraded images of pathological retinas that state-of-the-art methods cannot segment properly. Furthermore, the algorithm runs in 5 s, which is substantially faster than many of the state-of-the-art methods.
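
The core of the circular transformation, image variation along evenly oriented radial line segments around a candidate pixel, can be sketched as follows; segment length and the number of orientations are illustrative assumptions.

```python
# Sketch of radial image-variation scores for one candidate pixel.
import numpy as np
from scipy import ndimage

def radial_variation(img, cy, cx, n_angles=32, length=40):
    """cy, cx: integer pixel coordinates. Pixels with large variation
    along all radial directions are OD boundary/center candidates."""
    scores = []
    r = np.arange(1, length)
    for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        ys, xs = cy + r * np.sin(a), cx + r * np.cos(a)
        prof = ndimage.map_coordinates(img, [ys, xs], order=1)
        scores.append(np.abs(prof - img[cy, cx]).max())
    return np.array(scores)
```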

Journal ArticleDOI
TL;DR: This work exploits the fact that wavelets can represent magnetic resonance images well, with relatively few coefficients, to improve magnetic resonance imaging (MRI) reconstructions from undersampled data with arbitrary k-space trajectories and proposes a variant that combines recent improvements in convex optimization and that can be tuned to a given specific k- space trajectory.
Abstract: In this work, we exploit the fact that wavelets can represent magnetic resonance images well, with relatively few coefficients. We use this property to improve magnetic resonance imaging (MRI) reconstructions from undersampled data with arbitrary k-space trajectories. Reconstruction is posed as an optimization problem that could be solved with the iterative shrinkage/thresholding algorithm (ISTA) which, unfortunately, converges slowly. To make the approach more practical, we propose a variant that combines recent improvements in convex optimization and that can be tuned to a given specific k-space trajectory. We present a mathematical analysis that explains the performance of the algorithms. Using simulated and in vivo data, we show that our nonlinear method is fast, as it accelerates ISTA by almost two orders of magnitude. We also show that it remains competitive with TV regularization in terms of image quality.
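
The accelerated variant builds on FISTA-style momentum; below is a generic sketch with abstract operators (A maps wavelet coefficients to measured k-space samples, L bounds the Lipschitz constant of the gradient), rather than the paper's trajectory-tuned scheme.

```python
# Sketch of FISTA for wavelet-regularized MRI reconstruction:
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1  (x: wavelet coefficients).
import numpy as np

def soft(z, t):
    """Soft threshold, valid for complex arrays."""
    mag = np.abs(z)
    return z * np.maximum(mag - t, 0) / np.maximum(mag, 1e-12)

def fista(A, At, y, lam, L, iters):
    x = At(y)
    z, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft(z - At(A(z) - y) / L, lam / L)    # ISTA step at z
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x
```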

Journal ArticleDOI
TL;DR: An automated method to estimate the AVR in retinal color images by detecting the location of the optic disc, determining an appropriate region of interest (ROI), classifying vessels as arteries or veins, estimating vessel widths, and calculating the AVR is presented.
Abstract: A decreased ratio of the width of retinal arteries to veins [arteriolar-to-venular diameter ratio (AVR)] is well established as predictive of cerebral atrophy, stroke and other cardiovascular events in adults. Tortuous and dilated arteries and veins, as well as decreased AVR are also markers for plus disease in retinopathy of prematurity. This work presents an automated method to estimate the AVR in retinal color images by detecting the location of the optic disc, determining an appropriate region of interest (ROI), classifying vessels as arteries or veins, estimating vessel widths, and calculating the AVR. After vessel segmentation and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. A skeletonization operation is applied to the remaining vessels after which vessel crossings and bifurcation points are removed, leaving a set of vessel segments consisting of only vessel centerline pixels. Features are extracted from each centerline pixel in order to assign these a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected vessel segment should be the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery vein pairs are matched using an iterative algorithm, and the widths of the vessels are used to calculate the AVR. We trained and tested the algorithm on a set of 65 high resolution digital color fundus photographs using a reference standard that indicates for each major vessel in the image whether it is an artery or vein. We compared the AVR values produced by our system with those determined by a semi-automated reference system. We obtained a mean unsigned error of 0.06 (SD 0.04) in 40 images with a mean AVR of 0.67. A second observer using the semi-automated system obtained the same mean unsigned error of 0.06 (SD 0.05) on the set of images with a mean AVR of 0.66. The testing data and reference standard used in this study have been made publicly available.
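
Two of the steps have compact expressions: pooling per-pixel soft artery/vein labels by the median over each centerline segment, and forming the ratio from the resulting widths. The simple mean-width ratio below is an illustrative stand-in for the paper's iterative paired-vessel AVR computation.

```python
# Sketch of segment-level label pooling and a simplified AVR.
import numpy as np

def pool_segment_labels(soft_scores, segment_ids):
    """Assign each segment the median of its pixels' vein-likelihoods.
    segment_ids: integer map, 0 = background."""
    out = np.zeros_like(soft_scores)
    for s in np.unique(segment_ids):
        if s == 0:
            continue
        pix = segment_ids == s
        out[pix] = np.median(soft_scores[pix])
    return out

def avr(artery_widths, vein_widths):
    """Simplified AVR: ratio of mean artery width to mean vein width."""
    return np.mean(artery_widths) / np.mean(vein_widths)
```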

Journal ArticleDOI
TL;DR: In this paper, the l1 norm of the image gradient is used as a regularization method for brain decoding, which can be applied to fMRI data for brain mapping and brain decoding.
Abstract: While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional magnetic resonance imaging (fMRI) data, which provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioral variables from brain activation patterns, better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the l1 norm of the image gradient, also known as its total variation (TV), as regularization. We apply this method to fMRI data for the first time, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification.
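
The regularizer itself is easy to state in code: the isotropic total variation of a 3-D weight map, i.e., the l1 norm of its spatial gradient magnitude, added to the decoder's training loss.

```python
# Sketch of the TV penalty on a 3-D weight map (the decoder's coefficients).
import numpy as np

def total_variation(w):
    gx, gy, gz = np.gradient(w)                 # discrete spatial gradient
    return np.sqrt(gx**2 + gy**2 + gz**2).sum() # l1 norm of its magnitude
```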

Journal ArticleDOI
TL;DR: The ability of the proposed method to polarize the shear wave generation and propagation along a chosen axis permits the study of the local elastic anisotropy of myocardial muscle, which is found to vary with muscle depth.
Abstract: Shear wave imaging was evaluated for the in vivo assessment of myocardial biomechanical properties on ten open chest sheep. The use of dedicated ultrasonic sequences implemented on a very high frame rate ultrasonic scanner (> 5000 frames per second) enables the estimation of the quantitative shear modulus of myocardium several times during one cardiac cycle. A 128 element probe remotely generates a shear wave thanks to the radiation force induced by a focused ultrasonic burst. The resulting shear wave propagation is tracked using the same probe by cross-correlating successive ultrasonic images acquired at a very high frame rate. The shear wave speed estimated at each location in the ultrasonic image gives access to the local myocardial stiffness (shear modulus μ). The technique was found to be reproducible (standard deviation < 3%) and able to estimate both systolic and diastolic stiffness on each sheep (respectively μdias ≈ 2 kPa and μsyst ≈ 30 kPa). Moreover, the ability of the proposed method to polarize the shear wave generation and propagation along a chosen axis permits the study of the local elastic anisotropy of myocardial muscle. As expected, myocardial elastic anisotropy is found to vary with muscle depth. The real time capabilities and potential of Shear Wave Imaging using ultrafast scanners for cardiac applications are finally illustrated by studying the dynamics of this fractional anisotropy during the cardiac cycle.
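
The final quantitation admits a one-line model: fitting arrival time versus lateral position gives the shear-wave speed c, and μ = ρc² gives the shear modulus, with ρ ≈ 1000 kg/m³ the usual soft-tissue assumption.

```python
# Sketch of shear modulus estimation from shear-wave arrival times.
import numpy as np

def shear_modulus(positions_m, arrival_times_s, rho=1000.0):
    c = np.polyfit(arrival_times_s, positions_m, 1)[0]  # slope = speed (m/s)
    return rho * c ** 2                                  # shear modulus (Pa)
```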

Journal ArticleDOI
TL;DR: It is demonstrated that use of the imaging model in an iterative reconstruction method can improve the spatial resolution of the optoacoustic images as compared to those reconstructed assuming point-like ultrasound transducers.
Abstract: Optoacoustic tomography (OAT) is a hybrid imaging modality that combines the advantages of optical and ultrasound imaging. Most existing reconstruction algorithms for OAT assume that the ultrasound transducers employed to record the measurement data are point-like. When transducers with large detecting areas and/or compact measurement geometries are utilized, this assumption can result in conspicuous image blurring and distortions in the reconstructed images. In this work, a new OAT imaging model that incorporates the spatial and temporal responses of an ultrasound transducer is introduced. A discrete form of the imaging model is implemented and its numerical properties are investigated. We demonstrate that use of the imaging model in an iterative reconstruction method can improve the spatial resolution of the optoacoustic images as compared to those reconstructed assuming point-like ultrasound transducers.

Journal ArticleDOI
TL;DR: A semi-automated segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration is presented, demonstrating the strength of the method to detect the desired retinal layers with sufficient accuracy even in the presence of intensity inhomogeneity resulting from blood vessels.
Abstract: Optical coherence tomography (OCT) is a noninvasive, depth-resolved imaging modality that has become a prominent ophthalmic diagnostic technique. We present a semi-automated segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration. We adapt Chan-Vese's energy-minimizing active contours without edges for the OCT images, which suffer from low contrast and are highly corrupted by noise. A multiphase framework with a circular shape prior is adopted in order to model the boundaries of retinal layers and estimate the shape parameters using least squares. We use a contextual scheme to balance the weight of different terms in the energy functional. The results from various synthetic experiments and segmentation results on OCT images of rats are presented, demonstrating the strength of our method to detect the desired retinal layers with sufficient accuracy even in the presence of intensity inhomogeneity resulting from blood vessels. Our algorithm achieved an average Dice similarity coefficient of 0.84 over all segmented retinal layers, and of 0.94 for the combined nerve fiber layer, ganglion cell layer, and inner plexiform layer which are the critical layers for glaucomatous degeneration.
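
The single-phase Chan-Vese model the authors adapt is available off the shelf in scikit-image; the multiphase extension with the circular shape prior and contextual term weighting is the paper's contribution and is not reproduced in this toy call (parameter value illustrative).

```python
# Toy single-phase Chan-Vese segmentation with scikit-image.
import numpy as np
from skimage.segmentation import chan_vese

bscan = np.zeros((64, 64))
bscan[20:40, :] = 1.0            # toy bright "retinal layer" band
seg = chan_vese(bscan, mu=0.25)  # mu weighs the contour-length penalty
```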

Journal ArticleDOI
Andre Salomon, A. Goedicke, B. Schweizer, Til Aach, Volkmar Schulz
TL;DR: A generic iterative reconstruction approach to simultaneously estimate the local tracer concentration and the attenuation distribution using the segmented MR image as anatomical reference, which indicates a robust and reliable alternative to other MR-AC approaches targeting patient specific quantitative analysis in time-of-flight PET/MR.
Abstract: Medical investigations targeting a quantitative analysis of the positron emission tomography (PET) images require the incorporation of additional knowledge about the photon attenuation distribution in the patient. Today, energy range adapted attenuation maps derived from computed tomography (CT) scans are used to effectively compensate for image quality degrading effects, such as attenuation and scatter. Replacing CT by magnetic resonance (MR) is considered as the next evolutionary step in the field of hybrid imaging systems. However, unlike CT, MR does not measure the photon attenuation and thus does not provide an easy access to this valuable information. Hence, many research groups currently investigate different technologies for MR-based attenuation correction (MR-AC). Typically, these approaches are based on techniques such as special acquisition sequences (alone or in combination with subsequent image processing), anatomical atlas registration, or pattern recognition techniques using a data base of MR and corresponding CT images. We propose a generic iterative reconstruction approach to simultaneously estimate the local tracer concentration and the attenuation distribution using the segmented MR image as anatomical reference. Instead of applying predefined attenuation values to specific anatomical regions or tissue types, the gamma attenuation at 511 keV is determined from the PET emission data. In particular, our approach uses a maximum-likelihood estimation for the activity and a gradient-ascent based algorithm for the attenuation distribution. The adverse effects of scattered and accidental gamma coincidences on the quantitative accuracy of PET, as well as artifacts caused by the inherent crosstalk between activity and attenuation estimation are efficiently reduced using enhanced decay event localization provided by time-of-flight PET, accurate correction for accidental coincidences, and a reduced number of unknown attenuation coefficients. First results achieved with measured whole body PET data and reference segmentation from CT showed an absolute mean difference of 0.005 cm⁻¹ in the lungs, 0.0009 cm⁻¹ in case of fat, and 0.0015 cm⁻¹ for muscles and blood. The proposed method indicates a robust and reliable alternative to other MR-AC approaches targeting patient specific quantitative analysis in time-of-flight PET/MR.
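
One half of the joint estimation loop can be sketched as a standard MLEM activity update for a fixed attenuation estimate; the gradient-ascent attenuation update, time-of-flight handling, and randoms correction are abstracted away in this simplification.

```python
# Sketch of one MLEM activity update with fixed per-LOR attenuation factors.
import numpy as np

def mlem_step(lmbda, P, y, atten):
    """lmbda: activity image; P: (LORs x voxels) system matrix;
    y: measured counts; atten: attenuation factor per LOR."""
    fwd = atten * (P @ lmbda) + 1e-12          # attenuated forward projection
    back = P.T @ (atten * (y / fwd))           # backproject the count ratios
    return lmbda * back / (P.T @ atten + 1e-12)  # normalize by sensitivity
```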

Journal ArticleDOI
TL;DR: An original way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy is described.
Abstract: Despite the relative recency of its inception, the theory of compressive sampling (also known as compressed sensing, or CS) has already revolutionized multiple areas of applied sciences, a particularly important instance of which is medical imaging. Specifically, the theory has provided a different perspective on the important problem of optimal sampling in magnetic resonance imaging (MRI), with an ever-increasing body of works reporting stable and accurate reconstruction of MRI scans from the number of spectral measurements which would have been deemed unacceptably small as recently as five years ago. In this paper, the theory of CS is employed to palliate the problem of long acquisition times, which is known to be a major impediment to the clinical application of high angular resolution diffusion imaging (HARDI). Specifically, we demonstrate that a substantial reduction in data acquisition times is possible through minimization of the number of diffusion encoding gradients required for reliable reconstruction of HARDI scans. The success of such a minimization is primarily due to the availability of spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes an original way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on an efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology.

Journal ArticleDOI
TL;DR: This paper considers the sparse linear regression model with an l1-norm penalty, also known as the least absolute shrinkage and selection operator (LASSO) and a well-known decoding algorithm in compressed sensing (CS), for estimating sparse brain connectivity.
Abstract: Partial correlation is a useful connectivity measure for brain networks, especially, when it is needed to remove the confounding effects in highly correlated networks. Since it is difficult to estimate the exact partial correlation under the small-n large-p situation, a sparseness constraint is generally introduced. In this paper, we consider the sparse linear regression model with a l1-norm penalty, also known as the least absolute shrinkage and selection operator (LASSO), for estimating sparse brain connectivity. LASSO is a well-known decoding algorithm in compressed sensing (CS). The CS theory states that LASSO can reconstruct the exact sparse signal even from a small set of noisy measurements. We briefly show that the penalized linear regression for partial correlation estimation is related to CS. It opens a new possibility that the proposed framework can be used for a sparse brain network recovery. As an illustration, we construct sparse brain networks of 97 regions of interest (ROIs) obtained from FDG-PET imaging data for the autism spectrum disorder (ASD) children and the pediatric control (PedCon) subjects. As validation, we check the network reproducibilities by leave-one-out cross validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
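
The LASSO-based network construction can be sketched via neighborhood regression: each ROI is regressed on all others, and nonzero coefficients define edges. The regularization weight alpha is an illustrative assumption.

```python
# Sketch of sparse connectivity estimation via per-ROI LASSO regressions.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_network(X, alpha=0.1):
    """X: (subjects or time points) x ROIs data matrix."""
    n_roi = X.shape[1]
    A = np.zeros((n_roi, n_roi))
    for i in range(n_roi):
        others = np.delete(np.arange(n_roi), i)
        coefs = Lasso(alpha=alpha).fit(X[:, others], X[:, i]).coef_
        A[i, others] = coefs
    return (np.abs(A) + np.abs(A.T)) / 2   # symmetrize edge weights
```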

Journal ArticleDOI
TL;DR: An approach to generate a mean motion model of the lung based on thoracic 4D computed tomography data of different patients is proposed to extend motion modeling capabilities, and two examples of possible applications in radiation therapy and image-guided diagnosis are presented.
Abstract: Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ± 1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of applications. We present two examples of possible applications in radiation therapy and image-guided diagnosis.

Journal ArticleDOI
TL;DR: The results show that the ASD-POCS algorithm can yield images with quality comparable to that obtained with existing algorithms, while using one-sixth to one-quarter of the 361-view data currently used in typical micro-CT specimen imaging.
Abstract: Micro-computed tomography (micro-CT) is an important tool in biomedical research and preclinical applications that can provide visual inspection of and quantitative information about imaged small animals and biological samples such as vasculature specimens. Currently, micro-CT imaging uses projection data acquired at a large number (300-1000) of views, which can limit system throughput and potentially degrade image quality due to radiation-induced deformation or damage to the small animal or specimen. In this work, we have investigated low-dose micro-CT and its application to specimen imaging from substantially reduced projection data by using a recently developed algorithm, referred to as the adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) algorithm, which reconstructs an image through minimizing the image total-variation and enforcing data constraints. To validate and evaluate the performance of the ASD-POCS algorithm, we carried out quantitative evaluation studies in a number of tasks of practical interest in imaging of specimens of real animal organs. The results show that the ASD-POCS algorithm can yield images with quality comparable to that obtained with existing algorithms, while using one-sixth to one-quarter of the 361-view data currently used in typical micro-CT specimen imaging.
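
A schematic of the ASD-POCS alternation, with abstract projector/backprojector operators A and At, periodic-boundary TV differences for brevity, and illustrative step parameters; the published algorithm adapts its parameters more carefully than this sketch.

```python
# Sketch of ASD-POCS: relaxed data-consistency (POCS) with positivity,
# then TV steepest-descent steps whose size is tied to the data step.
import numpy as np

def tv_grad(u, eps=1e-8):
    ux = np.roll(u, -1, 0) - u
    uy = np.roll(u, -1, 1) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    # negative divergence of the normalized gradient = TV (sub)gradient
    return -((px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1)))

def asd_pocs(y, A, At, shape, iters=20, tv_steps=10, relax=0.1, alpha=0.2):
    x = np.zeros(shape)
    for _ in range(iters):
        x_prev = x.copy()
        x = np.maximum(x + relax * At(y - A(x)), 0)   # POCS + positivity
        data_step = np.linalg.norm(x - x_prev)
        for _ in range(tv_steps):                     # TV steepest descent
            g = tv_grad(x)
            x = x - alpha * data_step * g / (np.linalg.norm(g) + 1e-12)
    return x
```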

Journal ArticleDOI
TL;DR: A method for automatically detecting new vessels on the optic disc using retinal photography is described; 14 features were found to be effective and used in the final test, and the resulting accuracy may be sufficient for the method to play a useful clinical role in an automated retinopathy analysis system.
Abstract: Proliferative diabetic retinopathy is a rare condition likely to lead to severe visual impairment. It is characterized by the development of abnormal new retinal vessels. We describe a method for automatically detecting new vessels on the optic disc using retinal photography. Vessel-like candidate segments are first detected using a method based on watershed lines and ridge strength measurement. Fifteen feature parameters, associated with shape, position, orientation, brightness, contrast, and line density, are calculated for each candidate segment. Based on these features, each segment is categorized as normal or abnormal using a support vector machine (SVM) classifier. The system was trained and tested by cross-validation using 38 images with new vessels and 71 normal images from two diabetic retinal screening centers and one hospital eye clinic. The discrimination performance of the fifteen features was tested against a clinical reference standard. Fourteen features were found to be effective and used in the final test. The area under the receiver operator characteristic curve was 0.911 for detecting images with new vessels on the disc. This accuracy may be sufficient for it to play a useful clinical role in an automated retinopathy analysis system.
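
The classification-and-evaluation stage has a compact scikit-learn sketch, assuming a per-segment feature matrix has already been built; kernel choice and fold count are illustrative.

```python
# Sketch of SVM classification of candidate segments with cross-validated AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def evaluate(X, y, folds=10):
    """X: segments x 14 features; y: 1 = new vessel, 0 = normal."""
    clf = SVC(kernel="rbf", probability=True)
    scores = cross_val_predict(clf, X, y, cv=folds,
                               method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)
```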

Journal ArticleDOI
TL;DR: Diffusion parametric images obtained from five datasets of four patients were compared with histology data and the resulting receiver operating characteristic was superior to that of any perfusion-related parameter proposed in the literature.
Abstract: Prostate cancer is the most prevalent form of cancer in western men. An accurate early localization of prostate cancer, permitting efficient use of modern focal therapies, is currently hampered by a lack of imaging methods. Several methods have aimed at detecting microvascular changes associated with prostate cancer by quantitative imaging of blood perfusion, with limited success. Instead, we propose contrast-ultrasound diffusion imaging, based on the hypothesis that the complexity of microvascular changes is better reflected by diffusion than by perfusion characteristics. Quantification of local, intravascular diffusion is performed after transrectal ultrasound imaging of an intravenously injected ultrasound contrast agent bolus. Indicator dilution curves are measured with the ultrasound scanner resolution and fitted by a modified local density random walk model, which, being a solution of the convective diffusion equation, enables the estimation of a local, diffusion-related parameter. Diffusion parametric images obtained from five datasets of four patients were compared with histology data on a pixel basis. The resulting receiver operating characteristic (curve area = 0.91) was superior to that of any perfusion-related parameter proposed in the literature. Contrast-ultrasound diffusion imaging seems therefore to be a promising method for prostate cancer localization, encouraging further research to assess the clinical reliability.
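
Fitting the dilution curves has a short sketch. The LDRW form below is one common parameterization (the paper uses a modified variant), with λ a skewness parameter and μ the mean transit time; the diffusion-related quantity used for classification is derived from such a fit.

```python
# Sketch of per-pixel indicator-dilution curve fitting with an LDRW model.
import numpy as np
from scipy.optimize import curve_fit

def ldrw(t, A, mu, lam, t0):
    """C(t) = A * sqrt(lam / (2*pi*mu*tt)) * exp(-lam*(tt-mu)^2 / (2*mu*tt)),
    tt = t - t0 (time since bolus appearance)."""
    tt = np.maximum(t - t0, 1e-6)
    return A * np.sqrt(lam / (2 * np.pi * mu * tt)) \
             * np.exp(-lam * (tt - mu) ** 2 / (2 * mu * tt))

# Hypothetical usage on one pixel's acoustic intensity curve:
# popt, _ = curve_fit(ldrw, t, curve, p0=[curve.max(), 20.0, 5.0, 0.0])
```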

Journal ArticleDOI
TL;DR: This paper adapts the so-called approximation error approach to compensate for the modeling errors caused by inaccurately known body shape, and shows that recovery from simultaneous discretization related errors is feasible, allowing the use of computationally efficient reduced order models.
Abstract: Electrical impedance tomography is a highly unstable problem with respect to measurement and modeling errors. This instability is especially severe when absolute imaging is considered. With clinical measurements, accurate knowledge about the body shape is usually not available, and therefore an approximate model domain has to be used in the computational model. It has earlier been shown that large reconstruction artefacts result if the geometry of the model domain is incorrect. In this paper, we adapt the so-called approximation error approach to compensate for the modeling errors caused by inaccurately known body shape. This approach has previously been shown to be applicable to a variety of modeling errors, such as coarse discretization in the numerical approximation of the forward model and domain truncation. We evaluate the approach with a simulated example of thorax imaging, and also with experimental data from a laboratory setting, with absolute imaging considered in both cases. We show that the related modeling errors can be efficiently compensated for by the approximation error approach. We also show that recovery from simultaneous discretization related errors is feasible, allowing the use of computationally efficient reduced order models.
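
The approximation error approach itself is compact: sample plausible parameters, evaluate an accurate and a reduced/approximate forward model on each, and summarize the discrepancy by a mean and covariance that are folded into the measurement noise model during inversion. A minimal sketch:

```python
# Sketch of precomputing approximation-error statistics for EIT-style
# inversion with an inaccurate (e.g., wrong-shape, coarse) forward model.
import numpy as np

def approximation_error_stats(samples, forward_accurate, forward_approx):
    """samples: iterable of plausible parameter vectors; the two forward
    functions return simulated measurement vectors."""
    eps = np.array([forward_accurate(s) - forward_approx(s) for s in samples])
    return eps.mean(axis=0), np.cov(eps, rowvar=False)

# The inversion then uses noise mean shifted by eps_mean and covariance
# C_noise + C_eps in its likelihood, compensating for the model mismatch.
```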

Journal ArticleDOI
TL;DR: Generic methods to combine multiple CAD systems are presented, and the kind of performance increase that can be expected is investigated.
Abstract: Computer-aided detection (CAD) is increasingly used in clinical practice and for many applications a multitude of CAD systems have been developed. In practice, CAD systems have different strengths and weaknesses and it is therefore interesting to consider their combination. In this paper, we present generic methods to combine multiple CAD systems and investigate what kind of performance increase can be expected. Experimental results are presented using data from the ANODE09 and ROC09 online CAD challenges for the detection of pulmonary nodules in computed tomography scans and red lesions in retinal images, respectively. For both applications, combination results in a large and significant increase in performance when compared to the best individual CAD system.
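
A generic late-fusion combiner in the spirit described, assuming candidate scores from the different systems have already been aligned to the same candidates; rank normalization makes heterogeneous scores comparable before averaging.

```python
# Sketch of combining per-candidate scores from multiple CAD systems.
import numpy as np
from scipy.stats import rankdata

def combine(score_lists):
    """score_lists: one array of per-candidate scores per CAD system,
    aligned so that entry i refers to the same candidate in each."""
    ranks = [rankdata(s) / len(s) for s in score_lists]  # rank-normalize
    return np.mean(ranks, axis=0)                        # fused score
```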