Author

F. Jager

Bio: F. Jager is an academic researcher from the University of Erlangen-Nuremberg. The author has contributed to research in the topics of Image segmentation and Real-time MRI. The author has an h-index of 1 and has co-authored 1 publication, receiving 66 citations.

Papers
Journal Article
TL;DR: A novel method for MRI signal intensity standardization of arbitrary MRI scans that creates a pulse-sequence-dependent standard intensity scale; it is the first approach that uses the properties of all acquired images jointly.
Abstract: A major disadvantage of magnetic resonance imaging (MRI) compared to other imaging modalities like computed tomography is the fact that its intensities are not standardized. Our contribution is a novel method for MRI signal intensity standardization of arbitrary MRI scans that creates a pulse-sequence-dependent standard intensity scale. The proposed method is the first approach that uses the properties of all acquired images jointly (e.g., T1- and T2-weighted images). The image properties are stored in multidimensional joint histograms. In order to normalize the probability density function (pdf) of a newly acquired data set, a nonrigid image registration is performed between a reference and the joint histogram of the acquired images. From this matching a nonparametric transformation is obtained, which describes a mapping between the corresponding intensity spaces and subsequently adapts the image properties of the newly acquired series to a given standard. As the proposed intensity standardization is based on the probability density functions of the data sets only, it is independent of spatial coherence or prior segmentations of the reference and current images. Furthermore, it is not designed for a particular application, body region, or acquisition protocol. The evaluation was done in two different settings. First, MRI head images were used, so that the approach could be compared to state-of-the-art methods. Second, whole-body MRI scans were used; for this modality no other normalization algorithm is known in the literature. The Jeffrey divergence of the pdfs of the whole-body scans was reduced by 45%. All data sets used were acquired during clinical routine and thus included pathologies.
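As a rough illustration of two ingredients named in the abstract (the multidimensional joint histogram and the Jeffrey divergence used for evaluation), the NumPy sketch below builds a 2D joint pdf from co-registered T1- and T2-weighted volumes and compares two such pdfs. The function names, bin count, and the particular Jeffrey-divergence formulation are assumptions of this sketch; the paper's actual standardization step, a nonrigid registration between a reference and the acquired joint histogram, is not reproduced here.

```python
import numpy as np

def joint_histogram(t1, t2, bins=64):
    """2D joint histogram of co-registered T1- and T2-weighted volumes,
    normalised to a joint probability density (pdf)."""
    hist, _, _ = np.histogram2d(t1.ravel(), t2.ravel(), bins=bins)
    return hist / hist.sum()

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetric Jeffrey divergence between two pdfs, using one common
    formulation: J(P,Q) = sum p*log(p/m) + q*log(q/m), with m = (p+q)/2."""
    p = p.ravel() + eps
    q = q.ravel() + eps
    m = 0.5 * (p + q)
    return float(np.sum(p * np.log(p / m) + q * np.log(q / m)))
```

Under this reading, the 45% figure quoted above is a reduction of such a divergence between the pdfs of the whole-body scans after standardization.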

69 citations


Cited by
Journal Article
TL;DR: An optimised pipeline for multi-atlas brain MRI segmentation is introduced; for intensity-normalised images, intensity differences can be used in registration instead of standard normalised mutual information without compromising accuracy, while reducing the computation time threefold.
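A minimal sketch of the two similarity measures the summary contrasts, written from their standard definitions rather than from the paper's pipeline (function names and bin count are assumptions):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences: cheap, but only meaningful
    when the two images share a common intensity scale."""
    return np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def normalised_mutual_information(a, b, bins=64):
    """Standard NMI, (H(A) + H(B)) / H(A, B), estimated from a joint
    histogram; robust to intensity differences but more expensive."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```

The practical point is that SSD needs only a subtraction per voxel, whereas NMI rebuilds a joint histogram at every evaluation of the registration cost, which is plausibly where the reported threefold speed-up comes from.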

430 citations

Journal Article
23 Nov 2015 - PLOS ONE
TL;DR: A learning-based, unified random forest regression and classification framework is proposed to tackle the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images.
Abstract: In this paper, we address the problems of fully automatic localization and segmentation of 3D vertebral bodies from CT/MR images. We propose a learning-based, unified random forest regression and classification framework to tackle these two problems. More specifically, in the first stage, the localization of 3D vertebral bodies is solved with random forest regression where we aggregate the votes from a set of randomly sampled image patches to get a probability map of the center of a target vertebral body in a given image. The resultant probability map is then further regularized by Hidden Markov Model (HMM) to eliminate potential ambiguity caused by the neighboring vertebral bodies. The output from the first stage allows us to define a region of interest (ROI) for the segmentation step, where we use random forest classification to estimate the likelihood of a voxel in the ROI being foreground or background. The estimated likelihood is combined with the prior probability, which is learned from a set of training data, to get the posterior probability of the voxel. The segmentation of the target vertebral body is then done by a binary thresholding of the estimated probability. We evaluated the present approach on two openly available datasets: 1) 3D T2-weighted spine MR images from 23 patients and 2) 3D spine CT images from 10 patients. Taking manual segmentation as the ground truth (each MR image contains at least 7 vertebral bodies from T11 to L5 and each CT image contains 5 vertebral bodies from L1 to L5), we evaluated the present approach with leave-one-out experiments. Specifically, for the T2-weighted MR images, we achieved for localization a mean error of 1.6 mm, and for segmentation a mean Dice metric of 88.7% and a mean surface distance of 1.5 mm, respectively. For the CT images we achieved for localization a mean error of 1.9 mm, and for segmentation a mean Dice metric of 91.0% and a mean surface distance of 0.9 mm, respectively.
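The sketch below mimics the two-stage structure described above with scikit-learn random forests on synthetic stand-in data: stage one regresses patch-to-centre displacements and aggregates votes, stage two classifies ROI voxels and thresholds a posterior. The image features, the HMM regularisation of the vote map, the learned spatial prior, and all hyper-parameters are placeholders of this sketch, not the paper's choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)

# --- Stage 1: localisation by random forest regression -------------------
# Each randomly sampled patch votes for the displacement from its centre to
# the vertebral-body centre (synthetic features/targets stand in for the
# paper's image-based features).
patch_features = rng.normal(size=(200, 16))
displacements  = rng.normal(size=(200, 3))            # (dx, dy, dz) per patch
localiser = RandomForestRegressor(n_estimators=50, random_state=0)
localiser.fit(patch_features, displacements)

test_patch_features = rng.normal(size=(50, 16))
test_patch_centres  = rng.uniform(0, 100, size=(50, 3))
votes = test_patch_centres + localiser.predict(test_patch_features)
centre_estimate = votes.mean(axis=0)   # the paper instead regularises the vote map with an HMM

# --- Stage 2: segmentation by random forest classification ---------------
# Voxels in the ROI around the estimated centre are classified as
# foreground/background; the likelihood is combined with a prior and
# thresholded to give the binary segmentation.
voxel_features = rng.normal(size=(500, 16))
voxel_labels   = rng.integers(0, 2, size=500)
segmenter = RandomForestClassifier(n_estimators=50, random_state=0)
segmenter.fit(voxel_features, voxel_labels)

roi_features = rng.normal(size=(100, 16))
likelihood = segmenter.predict_proba(roi_features)[:, 1]
prior      = 0.5                                       # placeholder for the learned spatial prior
posterior  = likelihood * prior / (likelihood * prior + (1 - likelihood) * (1 - prior))
mask       = posterior > 0.5                           # binary thresholding
```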

103 citations

Journal Article
TL;DR: Feature-Based Alignment (FBA) is presented as a general method for efficient and robust model-to-image alignment, in which features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution.
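Spelled out in generic notation chosen here (not taken from the paper), marginalizing a latent feature assignment f out of a maximum a posteriori estimate of the transform T given the image I has the schematic form:

```latex
T^{*} = \arg\max_{T} \; p(T \mid I)
      = \arg\max_{T} \; \Big( \sum_{f} p(I, f \mid T) \Big) \, p(T)
```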

100 citations

Journal Article
TL;DR: A histogram-based MRI intensity normalization method is proposed that can normalize scans acquired on different MRI units and can be used to create a higher-quality Chinese brain template.
Abstract: Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. The histogram of the low-quality image was then normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range of the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built without it. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that, with the help of our normalization method, we can create a higher-quality Chinese brain template.
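A compact sketch of the two steps named in the abstract, intensity scaling (IS) and histogram normalization (HN), assuming simple linear rescaling and cdf-based histogram matching; the LIR/HIR values, bin count, and function names are placeholders, not the paper's exact procedure.

```python
import numpy as np

def intensity_scale(ref, lir, hir):
    """Step 1 (IS): linearly rescale the reference image so its intensities
    span the low- and high-intensity-region values [lir, hir]."""
    ref = ref.astype(np.float64)
    return lir + (ref - ref.min()) * (hir - lir) / (ref.max() - ref.min())

def histogram_normalise(low_quality, reference, n_bins=256):
    """Step 2 (HN): stretch the low-quality image's histogram onto the
    reference histogram, so its intensities also lie in [lir, hir].
    Implemented here as standard cdf-based histogram matching."""
    src = low_quality.ravel().astype(np.float64)
    ref = reference.ravel().astype(np.float64)

    src_hist, src_edges = np.histogram(src, bins=n_bins)
    ref_hist, ref_edges = np.histogram(ref, bins=n_bins)
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size

    # Map each source bin to the reference intensity with the same
    # cumulative probability, then apply the mapping voxel-wise.
    src_centres = 0.5 * (src_edges[:-1] + src_edges[1:])
    ref_centres = 0.5 * (ref_edges[:-1] + ref_edges[1:])
    mapping = np.interp(src_cdf, ref_cdf, ref_centres)
    return np.interp(low_quality, src_centres, mapping)
```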

85 citations

Journal Article
TL;DR: This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images; the method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies and achieves results that are better than or comparable to the state of the art.
Abstract: This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method contains two steps, where we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, where we take into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around the disc centers as background or foreground. The classification is done with a data-driven approach similar to the one used for localization, but in the segmentation case we aim to estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve for localization a mean error of 1.6–2.0 mm, and for segmentation a mean Dice metric of 85%–88% and a mean surface distance of 1.3–1.4 mm.
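As an illustration of the localization step only, the hypothetical helper below aggregates the patch votes (patch centre plus predicted displacement) on a coarse grid and returns the densest bin as the disc-centre estimate; the displacement regression itself, the joint training/test optimisation, and the geometric constraint between neighbouring discs described above are not shown, and the function name and bin size are assumptions.

```python
import numpy as np

def aggregate_votes(patch_centres, predicted_displacements, volume_shape, bin_size=4):
    """Accumulate centre votes on a coarse grid and return the peak location
    (in voxel coordinates) as the estimated disc centre."""
    votes = patch_centres + predicted_displacements            # (n_patches, 3)
    grid_shape = tuple(int(np.ceil(s / bin_size)) for s in volume_shape)
    accumulator = np.zeros(grid_shape)
    idx = np.clip((votes // bin_size).astype(int), 0, np.array(grid_shape) - 1)
    for i, j, k in idx:
        accumulator[i, j, k] += 1                              # one vote per patch
    peak = np.unravel_index(accumulator.argmax(), grid_shape)
    return (np.array(peak) + 0.5) * bin_size                   # centre of the densest bin
```

Called once per target disc with that disc's sampled patches, this would yield one candidate centre per disc around which the segmentation classifier is then applied.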

64 citations