Bio: Ron Kikinis is an academic researcher at Brigham and Women's Hospital. His research focuses on segmentation and diffusion MRI. He has an h-index of 126 and has co-authored 684 publications, which have received 63,398 citations. His previous affiliations include the University of Zurich and the University of Tokyo.
TL;DR: An overview of 3D Slicer is presented as a platform for prototyping, developing, and evaluating image analysis tools for clinical research applications, and the utility of the platform within the scope of the QIN is illustrated.
Abstract: Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free, open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations, but it also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates the translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on implementing the algorithm while providing abstractions for the common tasks of data communication, visualization, and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development, and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by existing QIN teams, and we elaborate on future directions that can further facilitate the development and validation of imaging biomarkers using 3D Slicer.
TL;DR: In this paper, an information-theoretic approach for finding the registration of volumetric medical images of differing modalities is presented, which is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized.
Abstract: A new information-theoretic approach is presented for finding the registration of volumetric medical images of differing modalities. Registration is achieved by adjustment of the relative position and orientation until the mutual information between the images is maximized. In our derivation of the registration procedure, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used with a wide variety of imaging devices. This approach works directly with image data; no pre-processing or segmentation is required. This technique is, however, more flexible and robust than other intensity-based techniques like correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, and with positron-emission tomography (PET) images. Surgical applications of the registration method are described.
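The registration criterion described above can be sketched compactly. The following is a minimal, hypothetical NumPy illustration, not the paper's implementation: mutual information is estimated from a joint intensity histogram, and a brute-force search over candidate row shifts stands in for the stochastic-approximation gradient ascent the abstract describes.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate mutual information between two images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0                             # skip empty bins to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_shift(fixed, moving, max_shift=3):
    """Toy alignment: the integer row shift that maximizes mutual information.

    The paper optimizes a full rigid pose with stochastic approximation; this
    exhaustive 1-D search is only a stand-in to show the criterion in use.
    """
    shifts = range(-max_shift, max_shift + 1)
    return max(shifts, key=lambda s: mutual_information(fixed, np.roll(moving, s, axis=0)))
```

Because the criterion needs only intensity co-occurrence statistics, it applies across modalities (MR/CT/PET) without pre-processing, which is the property the abstract emphasizes.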
TL;DR: The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation, and may be adapted for similar validation tasks.
Abstract: Magnetic resonance imaging (MRI) provides indispensable information about anatomy and pathology, enabling quantitative pathologic and clinical evaluations. Segmentation is an important image-processing step by which regions of an image are classified according to the presence of relevant anatomic features. For example, segmentation of MRI of the brain assigns a unique label (eg, white matter, gray matter, lesions, cerebrospinal fluid) to each voxel in an input gray-scale image (1). Segmentation methods typically yield binary or categoric classification results. However, continuous classification schemes (eg, volume size, distance between the volume surfaces, percentage of overlap voxels, percentage of highly discrepant voxels, and probability-based fractional segmentation) are increasingly becoming commonplace (2,3). The performance of segmentation methods has a direct impact on detection and target definition, as well as on monitoring of disease progression. Thus, the main clinical goal of surgical planning and quantitative monitoring of disease progression requires segmentation methods with high reproducibility because of the limited number of images available per patient. Several recent articles have addressed the importance of developing new automated segmentation methods in addition to binary classification, using overlapping mixture intensity distributions of abnormal and normal tissues (4–6), as well as probabilistic fractional segmentation methods on a continuous probability scale per voxel (2,3). These methods used geometric and probabilistic models to allow improved tissue volume estimates and contrast among tissue types (7–10). An overview and comparison of several existing algorithms for brain segmentation (eg, finite normal mixture histograms, genetic algorithms, and hidden Markov random field methods, evaluated by percent correctly identified voxels against digital phantoms) are found in the literature (7).
However, it is a challenging task to evaluate the accuracy and reproducibility of MRI segmentations. To conduct a validation analysis of the quality of image segmentation, it is typically necessary to know a voxel-wise gold standard. Under a simple binary truth (here labeled T), this gold standard is defined as an indicator of the true tissue class per voxel, ie, the target class (C1) such as malignant tumor, and the background class (C0) such as the remaining healthy tissues. Unfortunately, it is often impractical to know T based only on clinical data. Various alternative methods have been sought to carry out statistical validations. A useful method is to construct phantoms, either physically or digitally, with known T specified before building such a phantom. Because it is difficult to construct a physical phantom that can mimic the tissue properties of the human body, great efforts have been devoted to building digital phantoms that are both realistic and assessable by the radiologic community. Simulated MR digital brain phantom images of a normal subject or one with multiple sclerosis may be downloaded online from the Montreal BrainWeb (http://www.bic.mni.mcgill.ca/brainweb) (11,12). Nevertheless, even sophisticated phantoms may not yield clinical images with the full range of characteristics frequently observed in practice, such as partial volume artifacts, intensity heterogeneity, noise, and normal and pathologic anatomic variability. Without a known gold standard obtained by non-imaging methods such as histology, the validation task becomes an assessment of the reliability or reproducibility of segmentation. A simple spatial overlap index is the Dice similarity coefficient (DSC), first proposed by Dice (13). The Dice similarity coefficient is a spatial overlap index and a reproducibility validation metric. It was also called the proportion of specific agreement by Fleiss (14).
The value of a DSC ranges from 0, indicating no spatial overlap between two sets of binary segmentation results, to 1, indicating complete overlap. Dice similarity coefficient has been adopted to validate the segmentation of white matter lesions in MRIs (15) and the peripheral zone (PZ) of the prostate gland in prostate brachytherapy (16). Other validation metrics considered for statistical validation included Jaccard similarity coefficient (17), odds ratio (18), receiver operating characteristic analysis (19–22), mutual information (3,22), and distance-based statistics (23,24). In the present work, we applied and extended the DSC metric on two clinical examples analyzed previously. We aimed to validate (A) repeated binary segmentation of preoperative 1.5T and intraoperative 0.5T MRIs of the prostate’s PZ collected before and during brachytherapy for prostate cancer (16); and (B) semi-automated probabilistic fractional segmentation of MRIs of three different types of brain tumors, against a composite voxel-wise gold standard derived from repeated expert manual segmentations of the images (25). For both the prostate and brain datasets, segmentations were performed and reported previously (16,25). Here we have extended our methodology and shown a statistical validation analysis using these existing databases.
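For two binary segmentations A and B, the DSC described above is 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (complete overlap). A minimal NumPy sketch, for illustration only (not the code used in the study; the convention that two empty masks score 1.0 is an assumption):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|) for boolean masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # assumed convention: two empty segmentations agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Identical masks give 1.0, disjoint masks give 0.0, and two masks each covering half the image with half of that shared give 0.5.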
TL;DR: Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging data, and it has proven effective in a study including more than 1,000 brain scans.
Abstract: Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, and it has proven effective in a study that includes more than 1,000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo) and coronal [three-dimensional Fourier transform (3-DFT) gradient-echo T1-weighted], both using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariate classification when segmenting gray and white matter.
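The EM machinery behind such intensity classification can be illustrated with a toy two-class, 1-D Gaussian-mixture fit. This sketch deliberately omits what makes adaptive segmentation adaptive, namely the bias-field (inhomogeneity) correction and any spatial modeling; it only shows the alternating E-step (posterior responsibilities) and M-step (parameter re-estimation):

```python
import numpy as np

def em_two_class(x, iters=50):
    """Toy two-class 1-D Gaussian-mixture EM for intensity classification.

    A simplified stand-in for the paper's adaptive segmentation (no bias-field
    correction, no spatial priors). Returns per-sample posterior of class 1
    and the fitted class means.
    """
    x = np.asarray(x, float)
    mu = np.percentile(x, [25, 75])                 # crude initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each sample
        lik = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return r[:, 1], mu
```

On well-separated intensity clusters the fitted means converge to the cluster centers and the posteriors approach hard labels; the published method interleaves a similar E-step with an estimate of the smooth intensity bias field.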
TL;DR: In contrast to acquisition-based noise reduction methods, a postprocessing technique based on anisotropic diffusion is proposed; it overcomes the major drawbacks of conventional filter methods, namely the blurring of object boundaries and the suppression of fine structural details.
Abstract: In contrast to acquisition-based noise reduction methods, a postprocessing technique based on anisotropic diffusion is proposed. Extensions of this technique support 3-D and multiecho magnetic resonance imaging (MRI), incorporating higher spatial and spectral dimensions. The procedure overcomes the major drawbacks of conventional filter methods, namely the blurring of object boundaries and the suppression of fine structural details. The simplicity of the filter algorithm permits an efficient implementation, even on small workstations. The efficient noise reduction and sharpening of object boundaries are demonstrated by applying this image-processing technique to 2-D and 3-D spin-echo and gradient-echo MR data. The potential advantages for MRI, diagnosis, and computerized analysis are discussed in detail.
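The filter builds on Perona-Malik-style anisotropic diffusion: the diffusivity shrinks where the local gradient is large, so edges are preserved while flat regions are smoothed. A minimal 2-D sketch, with an assumed exponential diffusivity, periodic borders via np.roll, and fixed step size (the published filter additionally handles 3-D and multiecho data):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving anisotropic diffusion, 2-D Perona-Malik sketch.

    Diffusivity g = exp(-(|grad|/kappa)^2) approaches 0 at strong edges, so
    object boundaries stay sharp while homogeneous regions are denoised.
    Borders are treated as periodic (np.roll), which is adequate for a sketch.
    """
    u = np.asarray(img, float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four nearest neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # explicit update: flux to each neighbour weighted by its diffusivity
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Applied to a noisy step image, the within-region noise variance drops over the iterations while the step contrast across the boundary is retained, which is exactly the behavior the abstract contrasts with conventional (isotropic) smoothing.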
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …
TL;DR: An automated labeling system for subdividing the human cerebral cortex into standard gyral-based neuroanatomical regions is both anatomically valid and reliable and may be useful for both morphometric and functional studies of the cerebral cortex.
Abstract: In this study, we have assessed the validity and reliability of an automated labeling system that we have developed for subdividing the human cerebral cortex on magnetic resonance images into gyral-based regions of interest (ROIs). Using a dataset of 40 MRI scans, we manually identified 34 cortical ROIs in each of the individual hemispheres. This information was then encoded in the form of an atlas that was utilized to automatically label ROIs. To examine the validity, as well as the intra- and inter-rater reliability of the automated system, we used both intraclass correlation coefficients (ICC) and a new method known as mean distance maps to assess the degree of mismatch between the manual and the automated sets of ROIs. When compared with the manual ROIs, the automated ROIs were highly accurate, with an average ICC of 0.835 across all of the ROIs and a mean distance error of less than 1 mm. Intra- and inter-rater comparisons yielded little to no difference between the sets of ROIs. These findings suggest that the automated method we have developed for subdividing the human cerebral cortex into standard gyral-based neuroanatomical regions is both anatomically valid and reliable. This method may be useful for both morphometric and functional studies of the cerebral cortex as well as for clinical investigations aimed at tracking the evolution of disease-induced changes over time, including clinical trials in which MRI-based measures are used to examine response to treatment.
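The ICC used for such reliability analyses can be computed from a simple two-way ANOVA decomposition. The sketch below implements ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form of Shrout and Fleiss; whether this is the exact ICC variant the study used is an assumption here, and the code is an illustration, not the study's analysis.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    x: (n_targets, k_raters) matrix of scores. Computed from the two-way
    ANOVA mean squares (targets = rows, raters = columns).
    """
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1, keepdims=True)   # per-target means
    col = x.mean(axis=0, keepdims=True)   # per-rater means
    msr = k * ((row - grand) ** 2).sum() / (n - 1)              # between targets
    msc = n * ((col - grand) ** 2).sum() / (k - 1)              # between raters
    mse = ((x - row - col + grand) ** 2).sum() / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because this form measures absolute agreement, a constant offset between raters lowers the ICC even when their rankings agree perfectly, which is the appropriate behavior when validating automated against manual measurements.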
TL;DR: A set of automated procedures for obtaining accurate reconstructions of the cortical surface are described, which have been applied to data from more than 100 subjects, requiring little or no manual intervention.
Abstract: Several properties of the cerebral cortex, including its columnar and laminar organization, as well as the topographic organization of cortical areas, can only be properly understood in the context of the intrinsic two-dimensional structure of the cortical surface. In order to study such cortical properties in humans, it is necessary to obtain an accurate and explicit representation of the cortical surface in individual subjects. Here we describe a set of automated procedures for obtaining accurate reconstructions of the cortical surface, which have been applied to data from more than 100 subjects, requiring little or no manual intervention. Automated routines for unfolding and flattening the cortical surface are described in a companion paper. These procedures allow for the routine use of cortical surface-based analysis and visualization methods in functional brain imaging. © 1999 Academic Press
TL;DR: The characteristics of augmented reality systems are described, including a detailed discussion of the tradeoffs between optical and video blending approaches, and current efforts to overcome these problems are summarized.
Abstract: This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.