
Showing papers by "Ender Konukoglu published in 2011"


28 Oct 2011
TL;DR: A unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision and medical image analysis tasks and how alternatives such as random ferns and extremely randomized trees stem from the more general model is discussed.
Abstract: This paper presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning and active learning under the same decision forest framework. This means that the core implementation need be written and optimized only once, and can then be applied to many diverse tasks. The proposed model may be used both in a generative or discriminative way and may be applied to discrete or continuous, labelled or unlabelled data. The main contributions of this paper are: 1) proposing a single, probabilistic and efficient model for a variety of learning tasks; 2) demonstrating margin-maximizing properties of classification forests; 3) introducing density forests for learning accurate probability density functions; 4) proposing efficient algorithms for sampling from the forest generative model; 5) introducing manifold forests for non-linear embedding and dimensionality reduction; 6) proposing new and efficient forest-based algorithms for transductive and active learning. We discuss how alternatives such as random ferns and extremely randomized trees stem from our more general model. This paper is directed at both students who wish to learn the basics of decision forests and researchers interested in our new contributions. It presents both fundamental and novel concepts in a structured way, with many illustrative examples and real-world applications. Thorough comparisons with state-of-the-art algorithms such as support vector machines, boosting and Gaussian processes are presented, and relative advantages and disadvantages discussed. The many synthetic examples and existing commercial applications demonstrate the validity of the proposed model and its flexibility. PowerPoint slides (with many examples and animations) are also available from http://research.microsoft.com/groups/vision/decisionforests.aspx
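The unified forest idea can be illustrated with a minimal sketch: an ensemble of extremely randomized decision stumps voting on a label. This is a toy stand-in with hypothetical data, not the paper's implementation, which covers full trees and six task families:

```python
import random
from collections import Counter

def train_stump(X, y, rng):
    """One extremely randomized stump: a random feature and a random
    threshold; each side predicts the majority label of the training
    points it receives (falling back to the global majority if empty)."""
    f = rng.randrange(len(X[0]))
    lo, hi = min(x[f] for x in X), max(x[f] for x in X)
    t = rng.uniform(lo, hi)
    left = [yi for xi, yi in zip(X, y) if xi[f] <= t]
    right = [yi for xi, yi in zip(X, y) if xi[f] > t]
    def majority(labels):
        return Counter(labels or y).most_common(1)[0][0]
    return f, t, majority(left), majority(right)

def train_forest(X, y, n_trees=51, seed=0):
    rng = random.Random(seed)
    return [train_stump(X, y, rng) for _ in range(n_trees)]

def predict(forest, x):
    """Classification by majority vote over the ensemble."""
    votes = [l if x[f] <= t else r for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical toy data: two well-separated classes in 2D.
X = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.2), (0.9, 1.0), (1.0, 0.8), (0.8, 0.9)]
y = [0, 0, 0, 1, 1, 1]
forest = train_forest(X, y)
```

Each stump alone is weak, but averaging many randomized weak learners is the shared mechanism behind classification, regression and density forests alike.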

309 citations


Journal ArticleDOI
TL;DR: In an a posteriori analysis, it is shown how selected features during classification can be ranked according to their discriminative power and reveal the most important ones.

251 citations


Journal ArticleDOI
TL;DR: This work proposes an efficient Bayesian inference method for model personalization using polynomial chaos and compressed sensing and demonstrates how this can help in quantifying the impact of the data characteristics on the personalization (and thus prediction) results.
Abstract: Biophysical models are increasingly used for medical applications at the organ scale. However, model predictions are rarely associated with a confidence measure, although there are important sources of uncertainty in computational physiology methods, for instance the sparsity and noise of the clinical data used to adjust the model parameters (personalization) and the difficulty of accurately modeling soft tissue physiology. Recent theoretical progress in stochastic models makes their use computationally tractable, but estimating patient-specific parameters with such models remains a challenge. In this work we propose an efficient Bayesian inference method for model personalization using polynomial chaos and compressed sensing. This method makes Bayesian inference feasible in real 3D modeling problems. We demonstrate our method on cardiac electrophysiology. We first present validation results on synthetic data, then apply the proposed method to clinical data, and demonstrate how this can help quantify the impact of the data characteristics on the personalization (and thus prediction) results. The described method can be beneficial for the clinical use of personalized models, as it explicitly takes into account the uncertainties in the data and the model parameters while still enabling simulations that can be used to optimize treatment. Such uncertainty handling can be pivotal for the proper use of modeling as a clinical tool, because there is a crucial requirement to know the confidence one can have in personalized models.
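A minimal sketch of the non-intrusive idea behind polynomial chaos surrogates, assuming a hypothetical one-parameter forward model (the paper's setting is far richer, with compressed sensing for high-dimensional parameter spaces): fit a polynomial expansion to sampled model outputs, then run cheap Bayesian inference against the surrogate instead of the expensive simulation.

```python
import numpy as np

def pce_surrogate(forward, degree=6, n_samples=200, seed=0):
    """Regression-based polynomial chaos style surrogate: evaluate the
    expensive forward model at random parameter samples and fit a
    Legendre expansion by least squares."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0, n_samples)        # standardized parameter
    V = np.polynomial.legendre.legvander(theta, degree)
    coef, *_ = np.linalg.lstsq(V, forward(theta), rcond=None)
    return lambda t: np.polynomial.legendre.legval(t, coef)

# Hypothetical 'expensive' forward model standing in for a 3D simulation.
forward = lambda theta: np.exp(theta)
surrogate = pce_surrogate(forward)

# Cheap grid-based posterior over theta given one noisy observation,
# evaluated with the surrogate rather than the forward model.
grid = np.linspace(-1.0, 1.0, 401)
y_obs, sigma = np.exp(0.3), 0.05
log_like = -0.5 * ((surrogate(grid) - y_obs) / sigma) ** 2
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()
theta_map = grid[posterior.argmax()]
```

Once the coefficients are fitted, every posterior evaluation costs only a polynomial evaluation, which is what makes dense exploration of the parameter space tractable.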

101 citations


Book ChapterDOI
03 Jul 2011
TL;DR: This paper presents a new supervised learning framework for the efficient recognition and segmentation of anatomical structures in 3D computed tomography (CT), with as little training data as possible, and a combined generative-discriminative model which increases segmentation accuracy.
Abstract: This paper presents a new supervised learning framework for the efficient recognition and segmentation of anatomical structures in 3D computed tomography (CT), with as little training data as possible. Training supervised classifiers to recognize organs within CT scans requires a large number of manually delineated exemplar 3D images, which are very expensive to obtain. In this study, we borrow ideas from the field of active learning to optimally select a minimum subset of such images that yields accurate anatomy segmentation. The main contribution of this work is in designing a combined generative-discriminative model which: i) drives optimal selection of training data; and ii) increases segmentation accuracy. The optimal training set is constructed by finding unlabeled scans which maximize the disagreement between our two complementary probabilistic models, as measured by a modified version of the Jensen-Shannon divergence. Our algorithm is assessed on a database of 196 labeled clinical CT scans with high variability in resolution, anatomy, pathologies, etc. Quantitative evaluation shows that, compared with randomly selecting the scans to annotate, our method decreases the number of training images by up to 45%. Moreover, our generative model of body shape substantially increases segmentation accuracy when compared to either using the discriminative model alone or a generic smoothness prior (e.g. via a Markov Random Field).
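The disagreement-driven selection can be sketched with the standard Jensen-Shannon divergence (the paper uses a modified version); the scan names and posteriors below are hypothetical:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete posteriors."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    kl = lambda a, b: sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_scan(posteriors):
    """Pick the unlabeled scan where the two complementary models
    disagree most, i.e. the most informative one to annotate next."""
    return max(posteriors, key=lambda name: js_divergence(*posteriors[name]))

# Hypothetical per-scan label posteriors from the generative and
# discriminative models, respectively.
posteriors = {
    "scan_a": ([0.5, 0.5], [0.5, 0.5]),   # models agree
    "scan_b": ([0.9, 0.1], [0.2, 0.8]),   # models disagree
}
```

Scans on which both models already agree contribute little new information; annotating the maximal-disagreement scan is what shrinks the required training set.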

75 citations


Book ChapterDOI
03 Jul 2011
TL;DR: In this paper, a joint generative model of tumor growth and image observation is proposed for analyzing imaging data in patients with glioma, which can be used for integrating information from different multi-modal imaging protocols.
Abstract: Extensive imaging is routinely used in brain tumor patients to monitor the state of the disease and to evaluate therapeutic options. A large number of multi-modal and multi-temporal image volumes is acquired in standard clinical cases, requiring new approaches for comprehensive integration of information from different image sources and different time points. In this work we propose a joint generative model of tumor growth and of image observation that naturally handles multimodal and longitudinal data. We use the model for analyzing imaging data in patients with glioma. The tumor growth model is based on a reaction-diffusion framework. Model personalization relies only on a forward model for the growth process and on image likelihood. We take advantage of an adaptive sparse grid approximation for efficient inference via Markov Chain Monte Carlo sampling. The approach can be used for integrating information from different multi-modal imaging protocols and can easily be adapted to other tumor growth models.
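The reaction-diffusion core can be sketched in one dimension, assuming the classic Fisher-Kolmogorov form du/dt = D u'' + rho u(1 - u) with an explicit finite-difference step (the actual model runs in 3D on patient anatomy, with tissue-dependent diffusion):

```python
def fisher_kolmogorov_step(u, D, rho, dx, dt):
    """One explicit step of du/dt = D u'' + rho u (1 - u) on a 1D grid
    with zero-flux (Neumann) boundaries. Stability requires roughly
    dt <= dx**2 / (2 * D). u is the normalized tumor cell density."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2 * u[i] + right) / dx**2
        new[i] = u[i] + dt * (D * lap + rho * u[i] * (1 - u[i]))
    return new

# Toy run: a small seed density in the middle of the domain.
u = [0.0] * 50
u[25] = 0.5
for _ in range(200):
    u = fisher_kolmogorov_step(u, D=0.1, rho=1.0, dx=0.1, dt=0.01)
```

Diffusion spreads the density outward while the logistic reaction term regrows it toward saturation, producing the traveling invasion front that the growth model is built on.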

64 citations


01 Jun 2011
TL;DR: A joint generative model of tumor growth and of image observation that naturally handles multimodal and longitudinal data is proposed that can be used for integrating information from different multi-modal imaging protocols and can be adapted to other tumor growth models.
Abstract: 22nd International Conference, IPMI 2011, Kloster Irsee, Germany, July 3-8, 2011. Proceedings

50 citations


Journal ArticleDOI
TL;DR: The authors' patient growth pattern simulations showed microscopic invasion beyond irradiation margins for both combinations of high-diffusion/low-proliferation and low-diffusion/high-proliferation rate scenarios and indicated that some healthy brain tissue that was projected to be safe from recurrence fell inside treatment margins.

29 citations


Journal ArticleDOI
TL;DR: A fast model of the cardiac electrophysiology based on an eikonal formulation implemented with an anisotropic fast marching method is proposed and demonstrated in the context of a simulator of radio-frequency ablation of cardiac arrhythmia from patient-specific medical imaging data.
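A rough sketch of the eikonal idea, assuming an isotropic Dijkstra-style front propagation on a unit grid in place of the anisotropic fast marching method described (which accounts for fiber-direction-dependent conduction):

```python
import heapq

def arrival_times(speed, start):
    """First-arrival times of an activation wavefront on a 2D grid:
    time to cross one cell = spacing / local conduction speed, with
    unit spacing and 4-neighbor moves. A Dijkstra-style simplification
    of anisotropic fast marching."""
    rows, cols = len(speed), len(speed[0])
    T = {start: 0.0}
    heap = [(0.0, start)]
    done = set()
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) in done:
            continue
        done.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nt = t + 1.0 / speed[ni][nj]
                if nt < T.get((ni, nj), float("inf")):
                    T[(ni, nj)] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T
```

Single-pass front propagation like this is what makes the eikonal formulation fast enough for interactive simulation, in contrast to solving the full reaction-diffusion electrophysiology equations.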

29 citations


01 Jan 2011
TL;DR: A patient-specific model of tumor growth may provide new means for analyzing the acquired images and evaluating patient’s options, as all observations in these data sets arise from one underlying physiological process – the tumor-induced change of the tissue.
Abstract: In the diagnosis of brain tumors, extensive imaging protocols are routinely used to evaluate therapeutic options or to monitor the state of the disease. This gives rise to large numbers of multi-modal and multi-temporal image volumes even in standard clinical settings (Figure 1), requiring new approaches for comprehensively integrating information from different image sources and different time points. As all observations in these data sets arise from one underlying physiological process – the tumor-induced change of the tissue – a patient-specific model of tumor growth may provide new means for analyzing the acquired images and evaluating the patient's options. Mathematical tumor growth models try to explain the complex dynamics of cancer progression as a function of biological processes, which are assumed or known from prior experiments. Examples of such processes are the dynamics of individual tumor cells, their interactions with each other, their interactions with the surrounding tissue through mechanical or biochemical mechanisms, and the generation, transport and allocation of substances relevant to specific biochemical processes.

24 citations


Journal ArticleDOI
TL;DR: The sensitivity of experts' visual inspection is often too low to detect subtle growth of meningiomas from longitudinal scans and an alternative metric is described that provides accurate and robust measurements of subtle tumor changes while requiring a minimal amount of user input.
Abstract: BACKGROUND: Change detection is a critical component in the diagnosis and monitoring of many slowly evolving pathologies. OBJECTIVE: This article describes a semiautomatic monitoring approach using longitudinal medical images. We test the method on brain scans of patients with meningioma, which experts have found difficult to monitor because the tumor evolution is very slow and may be obscured by artifacts related to image acquisition. METHODS: We describe a semiautomatic procedure targeted toward identifying difficult-to-detect changes in brain tumor imaging. The tool combines input from a medical expert with state-of-the-art technology. The software is easy to calibrate and, in less than 5 minutes, returns the total volume of tumor change in mm³. We test the method on postgadolinium, T1-weighted magnetic resonance images of 10 patients with meningioma and compare our results with experts' findings. We also perform benchmark testing with synthetic data. RESULTS: Our experiments indicated that experts' visual inspections are not sensitive enough to detect subtle growth. Measurements based on experts' manual segmentations were highly accurate but also labor intensive. The accuracy of our approach was comparable to the experts' results. However, our approach required far less user input and generated more consistent measurements. CONCLUSION: The sensitivity of experts' visual inspection is often too low to detect subtle growth of meningiomas from longitudinal scans. Measurements based on experts' segmentation are highly accurate but generally too labor intensive for standard clinical settings. We described an alternative metric that provides accurate and robust measurements of subtle tumor changes while requiring a minimal amount of user input.
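The kind of quantitative output the tool returns can be illustrated with a toy volume-change computation over two aligned binary masks (a hypothetical helper; the real method additionally handles registration, calibration and acquisition artifacts):

```python
def volume_change_mm3(mask_before, mask_after, voxel_mm3):
    """Net tumor volume change in mm^3 between two aligned, flattened
    binary masks: voxels gained minus voxels lost, times voxel volume."""
    grew = sum(1 for b, a in zip(mask_before, mask_after) if a and not b)
    shrank = sum(1 for b, a in zip(mask_before, mask_after) if b and not a)
    return (grew - shrank) * voxel_mm3
```

Reporting a signed net volume in mm³ rather than asking the reader to judge two images side by side is precisely what sidesteps the low sensitivity of visual inspection.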

22 citations


Proceedings ArticleDOI
TL;DR: This work proposes a novel robust algorithm for automatic global linear image registration that uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis aligned bounding boxes and yields robust registration results.
Abstract: Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies, cross-modality fusion, and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima, yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures, represented as axis-aligned bounding boxes. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions, yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.
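One piece of such a pipeline can be sketched as a weighted least-squares affine fit from landmark correspondences (a simplified stand-in: the described method integrates full posterior distributions into the registration, whereas here hypothetical point locations and confidence weights are assumed):

```python
import numpy as np

def fit_affine(src, dst, weights):
    """Weighted least-squares affine map sending source landmark
    positions onto target positions. In a forest-based pipeline the
    landmarks could be bounding-box centers predicted by the regression
    forest, weighted by posterior confidence (all hypothetical here)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.sqrt(np.asarray(weights, float))[:, None]
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A * w, dst * w, rcond=None)
    return M  # apply as np.hstack([x, 1]) @ M
```

Because low-confidence predictions are down-weighted rather than discarded, a few poorly localized structures do not derail the global fit, which is the intuition behind the robustness gain.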

01 Jan 2011
TL;DR: This work demonstrates use cases of the Fisher-Kolmogorov glioma growth model in radiotherapy planning of a clinical case and analyzes the crucial input parameters to the model, in particular, the need for reliable segmentation of anatomical boundaries such as the falx cerebri and the tentorium cerebelli.
Abstract: Radiotherapy treatment planning requires a localization of the tumor within the patient. This is challenging to accomplish for microscopic infiltrative spread of disease that is not visible on current imaging modalities. Prime examples for infiltrative tumors are gliomas. With the help of mathematical models, common growth characteristics of gliomas, which are known from histopathological studies, can be incorporated in radiotherapy target delineation. This requires an imaging-based personalization of the model to the individual patient. We demonstrate use cases of the Fisher-Kolmogorov glioma growth model in radiotherapy planning of a clinical case. We further analyze the crucial input parameters to the model, in particular the need for reliable segmentation of anatomical boundaries such as the falx cerebri and the tentorium cerebelli.

Journal ArticleDOI
TL;DR: Since current planning procedures based on isotropic margins do not reflect known growth characteristics of glioma, computational tumor growth models have the potential to improve radiotherapy efficacy despite uncertainties in model parameters.
Abstract: Purpose: We investigate treatment planning for intensity-modulated radiotherapy (IMRT) for glioma based on a computational tumor growth model. Currently, the target consists of the tumor mass visible on MRI plus a 1–2 centimeter isotropic expansion to account for tumor cell infiltration into adjacent brain tissue. Observations suggest that gliomas primarily grow along white matter fiber tracts. Accounting for the anisotropic infiltration pattern may potentially improve the efficacy of radiotherapy and prolong survival. Methods: We assume that tumor growth is characterized by local proliferation of tumor cells and diffusion into neighboring tissue. Mathematically, this can be described via a partial differential equation of reaction-diffusion type (the Fisher-Kolmogoroff equation). Diffusion tensor imaging provides the spatial distribution of the diffusion tensor. Solution of the model equations yields a spatial distribution of the tumor cell density. Radiation-induced cell kill is described via the linear-quadratic cell survival model. Modification of current treatment planning procedures can be approached in three steps: 1) the target contour is adapted to match an isoline of the tumor cell density while keeping the total volume constant; 2) dose within the target can be redistributed to maximize overall cell kill while keeping the integral dose constant; 3) unavoidable dose outside the target can be redistributed to affect brain tissue more likely to be infiltrated by tumor cells. Results: Treatment planning studies suggest that overall cell survival can be reduced by 1–3 orders of magnitude compared to the isotropic margin approach, depending on model parameters. Accounting for uncertainties in model parameters reduces the expected benefit, but a sensitivity analysis over a range of realistic parameters suggests that substantial improvements can be expected.
Conclusions: Since current planning procedures based on isotropic margins do not reflect known growth characteristics of glioma, computational tumor growth models have the potential to improve radiotherapy efficacy despite uncertainties in model parameters. J Unkelbach receives funding from Philips Medical Systems; E Konukoglu receives funding from Microsoft Research.
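The linear-quadratic cell-kill model referenced above is simple to state; a sketch with illustrative (not paper-specific) alpha and beta values:

```python
import math

def surviving_fraction(dose_gy, alpha=0.3, beta=0.03):
    """Linear-quadratic cell survival: S = exp(-(alpha*D + beta*D^2)).
    The alpha and beta values are illustrative defaults, not taken
    from the study."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def fractionated_survival(dose_per_fraction, n_fractions, alpha=0.3, beta=0.03):
    """Survival after n identical fractions, assuming full repair
    between fractions (survival multiplies across fractions)."""
    return surviving_fraction(dose_per_fraction, alpha, beta) ** n_fractions
```

Combining this survival model with a spatial tumor cell density is what lets a planner compare dose distributions by the total number of surviving cells rather than by geometric coverage alone.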