Author

Dong Hye Ye

Bio: Dong Hye Ye is an academic researcher from Marquette University. The author has contributed to research in topics: Iterative reconstruction & Image quality. The author has an h-index of 17, has co-authored 48 publications receiving 3488 citations. Previous affiliations of Dong Hye Ye include Seoul National University & Purdue University.

Papers
Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
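The hierarchical majority-vote fusion mentioned in the abstract can be sketched per voxel. The label convention and region nesting below are illustrative assumptions (0 = background, 1 = edema, 2 = non-enhancing core, 3 = enhancing core, with nested regions whole tumor {1,2,3}, tumor core {2,3}, enhancing core {3}), not the paper's exact protocol:

```python
# Illustrative label convention, not taken from the paper: regions are
# nested, whole tumor {1,2,3} contains tumor core {2,3}, which contains
# the enhancing core {3}.
REGIONS = [{1, 2, 3}, {2, 3}, {3}]

def hierarchical_majority_vote(votes):
    """Fuse one voxel's labels from several segmentation algorithms.

    Each nested sub-region is voted on independently, coarse to fine;
    the voxel keeps the finest region that wins a strict majority.
    """
    label = 0  # background unless some region wins a majority
    for region in REGIONS:
        if sum(v in region for v in votes) * 2 > len(votes):
            label = min(region)  # representative label for this region
        else:
            break                # nesting: finer regions cannot win now
    return label

# Three algorithms agree the voxel is tumor core but split on enhancement:
print(hierarchical_majority_vote([2, 3, 2]))  # -> 2
```

Voting on nested regions rather than raw labels is what makes the fusion "hierarchical": agreement on the coarse region is preserved even when the raters disagree on the finer sub-label.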

3,699 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel framework for large deformation registration using the learned manifold of anatomical variation in the data, and demonstrates the advantages of the proposed framework over direct registration with both simulated and real databases of brain images.

125 citations

Book ChapterDOI
22 Sep 2013
TL;DR: A general database-driven framework for coherent synthesis of subject-specific scans of desired modality, which adopts and generalizes the patch-based label propagation (LP) strategy, and introduces a new data-driven regularization scheme that integrates intermediate estimates within an iterative search-and-synthesis strategy.
Abstract: We propose a general database-driven framework for coherent synthesis of subject-specific scans of desired modality, which adopts and generalizes the patch-based label propagation (LP) strategy. While modality synthesis has received increased attention lately, current methods are mainly tailored to specific applications. On the other hand, the LP framework has been extremely successful for certain segmentation tasks, however, so far it has not been used for estimation of entities other than categorical segmentation labels. We approach the synthesis task as a modality propagation, and demonstrate that with certain modifications the LP framework can be generalized to continuous settings providing coherent synthesis of different modalities, beyond segmentation labels. To achieve high-quality estimates we introduce a new data-driven regularization scheme, in which we integrate intermediate estimates within an iterative search-and-synthesis strategy. To efficiently leverage population data and ensure coherent synthesis, we employ a spatio-population search space restriction. In experiments, we demonstrate the quality of synthesis of different MRI signals (T2 and DTI-FA) from a T1 input, and show a novel application of modality synthesis for abnormality detection in multi-channel MRI of brain tumor patients.
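As a toy illustration of the propagation idea (nearest-neighbour search in the source modality, copying over the paired target-modality value), under the simplifying assumptions of 1-D patches and a plain SSD match, with none of the paper's regularization or search-space restriction:

```python
def propagate_modality(src_patches, db_pairs):
    """Toy modality propagation: for each source patch (e.g. from a T1
    scan), find the most similar database patch in the same modality and
    emit the paired target-modality value (e.g. the T2 intensity at the
    patch centre). `db_pairs` holds (source_patch, target_value) tuples."""
    def ssd(a, b):  # sum of squared differences between two patches
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(db_pairs, key=lambda pair: ssd(p, pair[0]))[1]
            for p in src_patches]

# Two database exemplars; the input patch is closest to the first one.
synth = propagate_modality([[1, 0, 1]], [([0, 0, 0], 10), ([5, 5, 5], 50)])
print(synth)  # -> [10]
```

The generalization the paper describes is exactly that the propagated quantity is a continuous intensity rather than a categorical segmentation label.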

105 citations

01 Oct 2012
TL;DR: This submission to the Brain Tumor Segmentation Challenge (BraTS) at MICCAI 2012 is described, which is based on the method for tissue-specific segmentation of high-grade brain tumors, and is able to capture the context information for each data point.
Abstract: We describe our submission to the Brain Tumor Segmentation Challenge (BraTS) at MICCAI 2012, which is based on our method for tissue-specific segmentation of high-grade brain tumors (3). The main idea is to cast the segmentation as a classification task, and use the discriminative power of context information. We realize this idea by equipping a classification forest (CF) with spatially non-local features to represent the data, and by providing the CF with initial probability estimates for the single tissue classes as additional input (alongside the MRI channels). The initial probabilities are patient-specific, and computed at test time based on a learned model of intensity. Through the combination of the initial probabilities and the non-local features, our approach is able to capture the context information for each data point. Our method is fully automatic, with segmentation run times in the range of 1-2 minutes per patient. We evaluate the submission by cross-validation on the real and synthetic, high- and low-grade tumor BraTS data sets.
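A minimal sketch of the spatially non-local feature idea: a voxel is described by its own intensity plus intensities sampled at fixed offsets, which a classification forest can then split on. The offsets and the tiny 2-D volume are illustrative; the actual method also feeds the patient-specific tissue priors as additional input channels.

```python
def context_features(volume, x, y, offsets):
    """Build one voxel's feature vector: its own intensity followed by
    intensities sampled at fixed spatial offsets (clamped at the image
    border). A classification forest would split on these values."""
    h, w = len(volume), len(volume[0])
    feats = [volume[x][y]]
    for dx, dy in offsets:
        xi = min(max(x + dx, 0), h - 1)
        yi = min(max(y + dy, 0), w - 1)
        feats.append(volume[xi][yi])
    return feats

# A 2x2 toy slice; features for the top-left voxel with two offsets.
print(context_features([[1, 2], [3, 4]], 0, 0, [(1, 0), (0, 1)]))  # -> [1, 3, 2]
```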

88 citations

Proceedings ArticleDOI
01 Oct 2016
TL;DR: A new approach to detect and track UAVs from a single camera mounted on a different UAV, which finds spatio-temporal traits of each moving object through optical flow matching and classifies those candidate targets based on their motion patterns compared with the background.
Abstract: Despite the recent flight control regulations, Unmanned Aerial Vehicles (UAVs) are still gaining popularity in civilian and military applications, as well as for personal use. Such emerging interest is pushing the development of effective collision avoidance systems. These systems play a critical role in UAV operations, especially in crowded airspace settings. Because of the cost and weight limitations associated with UAV payloads, camera-based technologies are the de facto choice for collision avoidance navigation systems. This requires multi-target detection and tracking algorithms that can run efficiently on board from a video feed. While there has been a great deal of research on object detection and tracking from a stationary camera, few have attempted to detect and track small UAVs from a moving camera. In this paper, we present a new approach to detect and track UAVs from a single camera mounted on a different UAV. Initially, we estimate background motion via a perspective transformation model and then identify distinctive points in the background-subtracted image. We find spatio-temporal traits of each moving object through optical flow matching and then classify those candidate targets based on their motion patterns compared with the background. The performance is boosted through Kalman filter tracking, which enforces temporal consistency among the candidate detections. The algorithm was validated on video datasets taken from a UAV. Results show that our algorithm can effectively detect and track small UAVs with limited computing resources.
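The Kalman-filter step that enforces temporal consistency can be sketched with a 1-D constant-velocity model over scalar position measurements. The state layout and the noise settings q and r below are illustrative choices, not taken from the paper:

```python
def kalman_track(measurements, q=1e-3, r=1.0):
    """1-D constant-velocity Kalman filter over scalar position
    measurements: the kind of smoothing used to keep candidate UAV
    detections temporally consistent across frames. q (process noise)
    and r (measurement noise) are illustrative settings."""
    x = [measurements[0], 0.0]     # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
    smoothed = []
    for z in measurements:
        # Predict with F = [[1, 1], [0, 1]] and process noise diag(q, q).
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with measurement z (observation model H = [1, 0]).
        s = P[0][0] + r                       # innovation variance
        k0, k1 = P[0][0] / s, P[1][0] / s     # Kalman gain
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        smoothed.append(x[0])
    return smoothed
```

On a real system the same recursion runs in 2-D over detection centroids; its cheap per-frame cost is what makes it attractive for the limited on-board computing resources mentioned above.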

80 citations


Cited by
Journal ArticleDOI
TL;DR: Two specific computer-aided detection problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification, are studied, achieving state-of-the-art performance on mediastinal LN detection and reporting the first five-fold cross-validation classification results for ILD.
Abstract: Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
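The fine-tuning recipe the paper examines (keep the pretrained feature extractor fixed, retrain a task-specific head) can be illustrated in miniature. The extractor below is a stand-in function, not a real ImageNet CNN, and the logistic-regression head and hyperparameters are arbitrary choices:

```python
import math

def finetune_head(frozen_extract, data, steps=200, lr=0.5):
    """Fit a new binary logistic-regression head on top of a frozen
    feature extractor; only the head weights (w, b) are updated,
    mimicking fine-tuning just the last layer of a pretrained CNN."""
    dim = len(frozen_extract(data[0][0]))
    w, b = [0.0] * dim, 0.0
    for _ in range(steps):
        for x, y in data:
            f = frozen_extract(x)                         # frozen features
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))                # sigmoid
            g = p - y                                     # logistic-loss gradient
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# Stand-in "extractor" (identity) and a tiny separable toy task.
w, b = finetune_head(lambda x: x,
                     [([0.0, 0.0], 0), ([1.0, 1.0], 1),
                      ([0.2, 0.1], 0), ([0.9, 0.8], 1)])
```

Freezing the extractor is what makes the approach data-efficient: with few labeled medical images, only the small head needs to be estimated, while full fine-tuning would update the extractor weights as well.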

4,249 citations

Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

3,699 citations

Journal ArticleDOI
TL;DR: An efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data, and improves on the state-of-the-art for all three applications.

2,842 citations

Journal ArticleDOI
TL;DR: A fast and accurate fully automatic method for brain tumor segmentation which is competitive both in terms of accuracy and speed compared to the state of the art, and introduces a novel cascaded architecture that allows the system to more accurately model local label dependencies.

2,538 citations

Journal ArticleDOI
TL;DR: nnU-Net is a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

2,040 citations