scispace - formally typeset
Author

Xiaotao Guo

Bio: Xiaotao Guo is an academic researcher from Columbia University Medical Center. The author has contributed to research in topics: Digital watermarking & Image segmentation. The author has an h-index of 14 and has co-authored 28 publications receiving 3,374 citations. Previous affiliations of Xiaotao Guo include Shanghai Jiao Tong University & Brigham and Women's Hospital.

Papers
Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
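The two quantitative ideas in the abstract—Dice overlap for rater/algorithm agreement and majority-vote fusion of several segmentations—can be sketched as follows. This is a minimal illustration on binary masks; the function names are my own and not from the benchmark's actual evaluation code:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

def majority_vote(masks):
    """Fuse several binary segmentations: a voxel is labeled tumor
    if more than half of the input algorithms label it tumor."""
    masks = np.asarray(masks, dtype=bool)
    return masks.sum(axis=0) > masks.shape[0] / 2
```

The paper's fusion is hierarchical (it resolves nested sub-regions such as whole tumor, core, and enhancing core in order); the flat vote above is only the basic building block.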

3,699 citations

Journal ArticleDOI
TL;DR: At the tissue level, premenopausal women with more central adiposity had inferior bone quality and stiffness and markedly lower bone formation and the relationship between trunk fat and bone volume remained significant after controlling for age and BMI.
Abstract: Context: The conventional view that obesity is beneficial for bone strength has recently been challenged by studies that link obesity, particularly visceral obesity, to low bone mass and fractures. It is controversial whether effects of obesity on bone are mediated by increased bone resorption or decreased bone formation. Objective: The objective of the study was to evaluate bone microarchitecture and remodeling in healthy premenopausal women of varying weights. Design: We measured bone density and trunk fat by dual-energy x-ray absorptiometry in 40 women and by computed tomography in a subset. Bone microarchitecture, stiffness, remodeling, and marrow fat were assessed in labeled transiliac bone biopsies. Results: Body mass index (BMI) ranged from 20.1 to 39.2 kg/m2. Dual-energy x-ray absorptiometry-trunk fat was directly associated with BMI (r = 0.78, P < .001) and visceral fat by computed tomography (r = 0.79, P < .001). Compared with women in the lowest tertile of trunk fat, those in the highest tertil...

165 citations

Journal ArticleDOI
TL;DR: The use of fluorescein as a microsurgical adjunct for guiding GBM resection to facilitate safe maximal removal is supported and provides an easily visualized marker for glioma pathology in both CE and NCE regions of GBM.
Abstract: OBJECTIVE Extent of resection is an important prognostic factor in patients undergoing surgery for glioblastoma (GBM). Recent evidence suggests that intravenously administered fluorescein sodium associates with tumor tissue, facilitating safe maximal resection of GBM. In this study, the authors evaluate the safety and utility of intraoperative fluorescein guidance for the prediction of histopathological alteration both in the contrast-enhancing (CE) regions, where this relationship has been established, and into the non-CE (NCE), diffusely infiltrated margins. METHODS Thirty-two patients received fluorescein sodium (3 mg/kg) intravenously prior to resection. Fluorescence was intraoperatively visualized using a Zeiss Pentero surgical microscope equipped with a YELLOW 560 filter. Stereotactically localized biopsy specimens were acquired from CE and NCE regions based on preoperative MRI in conjunction with neuronavigation. The fluorescence intensity of these specimens was subjectively classified in real time with subsequent quantitative image analysis, histopathological evaluation of localized biopsy specimens, and radiological volumetric assessment of the extent of resection. RESULTS Bright fluorescence was observed in all GBMs and localized to the CE regions and portions of the NCE margins of the tumors, thus serving as a visual guide during resection. Gross-total resection (GTR) was achieved in 84% of the patients with an average resected volume of 95%, and this rate was higher among patients for whom GTR was the surgical goal (GTR achieved in 93.1% of patients, average resected volume of 99.7%). Intraoperative fluorescein staining correlated with histopathological alteration in both CE and NCE regions, with positive predictive values by subjective fluorescence evaluation greater than 96% in NCE regions. CONCLUSIONS Intraoperative administration of fluorescein provides an easily visualized marker for glioma pathology in both CE and NCE regions of GBM. These findings support the use of fluorescein as a microsurgical adjunct for guiding GBM resection to facilitate safe maximal removal.

118 citations

Journal ArticleDOI
TL;DR: This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images.
Abstract: This paper presents a lossless watermarking scheme in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, which is represented by a polygon, is chosen intentionally to prevent introducing embedding distortion in the ROI. Only the vertex information of a polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier presented in electronic patient record (EPR) is embedded for verifying the authenticity by simultaneously processing the watermarked image and the EPR. Combining with fingerprint system, patient’s fingerprint information is embedded into several image slices and then extracted for verifying the authenticity.
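The core reversible-embedding step described above—difference expansion of adjacent pixel values—can be sketched as below. This is a minimal single-bit illustration in the style of Tian's difference expansion; it omits the overflow/underflow checks, expandability tests, and polygon-based ROI bookkeeping that the paper's full scheme requires:

```python
def embed_bit(x, y, b):
    """Embed one bit b into a pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2          # integer average, preserved by the transform
    h = x - y                 # pixel difference
    h2 = 2 * h + b            # expand the difference and append the bit
    x2 = l + (h2 + 1) // 2
    y2 = l - h2 // 2
    return x2, y2

def extract_bit(x2, y2):
    """Recover the original pair and the embedded bit (lossless inverse)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    b = h2 & 1                # the embedded bit is the LSB of the difference
    h = h2 >> 1               # arithmetic shift recovers the original difference
    x = l + (h + 1) // 2
    y = l - h // 2
    return x, y, b
```

Because the integer average is invariant and the expanded difference carries the payload in its least-significant bit, extraction restores the original pixel values exactly, which is what makes the watermarking scheme lossless.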

90 citations

Journal ArticleDOI
TL;DR: A predictive model for patient OS that could potentially assist clinical decision making is developed with the use of machine learning techniques to analyze imaging features derived from pre- and posttherapy multimodal MRI.
Abstract: BACKGROUND Bevacizumab is a humanized antibody against vascular endothelial growth factor approved for treatment of recurrent glioblastoma. There is a need to discover imaging biomarkers that can aid in the selection of patients who will likely derive the most survival benefit from bevacizumab. METHODS The aim of the study was to examine if pre- and posttherapy multimodal MRI features could predict progression-free survival and overall survival (OS) for patients with recurrent glioblastoma treated with bevacizumab. The patient population included 84 patients in a training cohort and 42 patients in a testing cohort, separated based on pretherapy imaging date. Tumor volumes of interest were segmented from contrast-enhanced T1-weighted and fluid attenuated inversion recovery images and were used to derive volumetric, shape, texture, parametric, and histogram features. A total of 2293 pretherapy and 9811 posttherapy features were used to generate the model. RESULTS Using standard radiographic assessment criteria, the hazard ratio for predicting OS was 3.38 (P < .001). The hazard ratios for pre- and posttherapy features predicting OS were 5.10 (P < .001) and 3.64 (P < .005) for the training and testing cohorts, respectively. CONCLUSION With the use of machine learning techniques to analyze imaging features derived from pre- and posttherapy multimodal MRI, we were able to develop a predictive model for patient OS that could potentially assist clinical decision making.
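As an illustration of the simplest feature family the abstract mentions—first-order histogram features computed over a tumor volume of interest—a minimal sketch follows. The feature names and bin count are hypothetical and not the study's actual feature set:

```python
import numpy as np

def histogram_features(voxels, n_bins=32):
    """Illustrative first-order intensity features of a tumor VOI."""
    v = np.asarray(voxels, dtype=float).ravel()
    mu, sigma = v.mean(), v.std()
    counts, _ = np.histogram(v, bins=n_bins)
    p = counts / counts.sum()         # normalize bin counts to probabilities
    p = p[p > 0]                      # drop empty bins before taking logs
    return {
        "mean": mu,
        "std": sigma,
        "skewness": ((v - mu) ** 3).mean() / sigma ** 3 if sigma > 0 else 0.0,
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

In a radiomics pipeline like the one described, such per-VOI features (alongside shape, texture, and parametric families) would form the columns of the feature matrix fed to the survival model.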

80 citations


Cited by
Journal ArticleDOI
TL;DR: Two specific computer-aided detection problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied, achieving the state-of-the-art performance on the mediastinal LN detection, and the first five-fold cross-validation classification results are reported.
Abstract: Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we examine three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.

4,249 citations

Journal ArticleDOI
TL;DR: An efficient and effective dense training scheme that joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data, improving on the state-of-the-art for all three applications.

2,842 citations

Journal ArticleDOI
TL;DR: A fast and accurate fully automatic method for brain tumor segmentation which is competitive both in terms of accuracy and speed compared to the state of the art, and introduces a novel cascaded architecture that allows the system to more accurately model local label dependencies.

2,538 citations

Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

2,040 citations