scispace - formally typeset
Author

Yoonmi Hong

Other affiliations: KAIST, Yonsei University, Samsung
Bio: Yoonmi Hong is an academic researcher at the University of North Carolina at Chapel Hill. Her research focuses on diffusion MRI and interpolation. She has an h-index of 12 and has co-authored 56 publications receiving 1,072 citations. Previous affiliations of Yoonmi Hong include KAIST and Yonsei University.


Papers
Journal ArticleDOI
TL;DR: Measures how far state-of-the-art deep learning methods can go at assessing cardiac MRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies, opening the door to highly accurate and fully automatic analysis of cardiac MRI.
Abstract: Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the “Automatic Cardiac Diagnosis Challenge” dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MRI. We also identify scenarios in which deep learning methods still fail. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.

1,056 citations

Journal ArticleDOI
TL;DR: A novel video compression scheme based on a highly flexible hierarchy of unit representation which includes three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU), which was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work.
Abstract: This paper proposes a novel video compression scheme based on a highly flexible hierarchy of unit representation which includes three block concepts: coding unit (CU), prediction unit (PU), and transform unit (TU). This separation of the block structure into three different concepts allows each to be optimized according to its role; the CU is a macroblock-like unit which supports region splitting in a manner similar to a conventional quadtree, the PU supports nonsquare motion partition shapes for motion compensation, while the TU allows the transform size to be defined independently from the PU. Several other coding tools are extended to arbitrary unit size to maintain consistency with the proposed design, e.g., transform size is extended up to 64 × 64 and intraprediction is designed to support an arbitrary number of angles for variable block sizes. Other novel techniques such as a new noncascading interpolation filter design allowing arbitrary motion accuracy and a leaky prediction technique using both open-loop and closed-loop predictors are also introduced. The video codec described in this paper was a candidate in the competitive phase of the high-efficiency video coding (HEVC) standardization work. Compared to H.264/AVC, it demonstrated bit rate reductions of around 40% based on objective measures and around 60% based on subjective testing with 1080p sequences. It has been partially adopted into the first standardization model of the collaborative phase of the HEVC effort.

193 citations

Book ChapterDOI
Yeonggul Jang, Yoonmi Hong, Seongmin Ha, Sekeun Kim, Hyuk Jae Chang
10 Sep 2017
TL;DR: A fully convolutional neural network is presented to efficiently segment LV and RV as well as myocardium in cine-MRI to analyze cardiac function and viability.
Abstract: Automatic and accurate segmentation of the Left Ventricle (LV) and Right Ventricle (RV) in cine-MRI is required to analyze cardiac function and viability. We present a fully convolutional neural network to efficiently segment the LV and RV as well as the myocardium. The network is trained end-to-end from scratch. Average Dice scores from five-fold cross-validation on the ACDC training dataset were 0.94, 0.89, and 0.88 for the LV, RV, and myocardium, respectively. Experimental results show the robustness of the proposed architecture.
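The Dice scores reported above measure overlap between a predicted mask and the expert annotation. A minimal sketch of that metric, assuming binary masks stored as NumPy arrays (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: 2 of 3 foreground pixels agree in each mask.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap, so the 0.94 LV result above indicates near-expert agreement.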

63 citations

Patent
Alexander Alshin, Elena Alshina, Chen Jianle, Han Woo-Jin, Nikolay Shlyakhov, Yoonmi Hong
30 Sep 2011
TL;DR: In this article, a method of interpolating an image is presented in which interpolation filter coefficients are determined based on a sub-pel-unit interpolation location and a smoothness measure.
Abstract: Provided are a method of interpolating an image by determining interpolation filter coefficients, and an apparatus for performing the same. The method includes: differently selecting an interpolation filter, from among interpolation filters for generating at least one sub-pel-unit pixel value located between integer-pel-unit pixels, based on a sub-pel-unit interpolation location and a smoothness; and generating the at least one sub-pel-unit pixel value by interpolating, using the selected interpolation filter, pixel values of the integer-pel-unit pixels.

38 citations

Patent
05 Apr 2011
TL;DR: In this article, a first filter is selected from among a plurality of different filters according to an interpolation location, and at least one pixel value of a fractional pixel unit is generated by interpolating between the pixel values of integer pixel units using the selected filter.
Abstract: Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
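Both interpolation patents above revolve around the same idea: pick a 1-D filter keyed to the fractional (sub-pel) interpolation location, then convolve it with the integer-pel samples. A hedged sketch of that selection-and-apply step follows; the tap values are illustrative placeholders that merely sum to 1, not the coefficients claimed in the patents or standardized in HEVC:

```python
import numpy as np

# Hypothetical filter bank: one set of taps per fractional offset.
# Each filter's taps sum to 64/64 = 1 so flat signals pass through unchanged.
FILTERS = {
    0.25: np.array([-1, 4, 54, 16, -9], dtype=float) / 64,
    0.5:  np.array([-1, 9, 24, 24, 9, -1], dtype=float) / 64,
    0.75: np.array([-9, 16, 54, 4, -1], dtype=float) / 64,
}

def interpolate(pixels, frac):
    """Generate sub-pel samples at fractional offset `frac` along a row.

    Selects the filter by interpolation location, then convolves it with
    the integer-pel pixel values (only fully-supported outputs are kept).
    """
    taps = FILTERS[frac]
    return np.convolve(pixels, taps[::-1], mode="valid")

row = np.arange(12, dtype=float)
half_pel = interpolate(row, 0.5)   # samples midway between integer pixels
```

In a real codec the encoder would additionally signal or derive which filter set to use (e.g., by smoothness, as in the first patent); here the lookup table stands in for that decision.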

32 citations


Cited by
Journal ArticleDOI
TL;DR: Introduces nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training, and post-processing, for any new task.
Abstract: Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.

2,040 citations


Journal ArticleDOI
TL;DR: An automated analysis method based on a fully convolutional network achieves a performance on par with human experts in analysing CMR images and deriving clinically relevant measures.
Abstract: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which are time-consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining the FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement is 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV.
On long-axis image test sets, the average Dice metric is 0.93 for the LA cavity (2-chamber view), 0.95 for the LA cavity (4-chamber view) and 0.96 for the RA cavity (4-chamber view). The performance is comparable to human inter-observer variability. We show that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinically relevant measures.

512 citations

Journal ArticleDOI
Il-Koo Kim, Min Jung-Hye, Tammy Lee, Woo-Jin Han, Jeong-Hoon Park
TL;DR: Technical details of the block partitioning structure of HEVC are introduced with an emphasis on the method of designing a consistent framework by combining the three different units together and experimental results are provided to justify the role of each component.
Abstract: High Efficiency Video Coding (HEVC) is the latest joint standardization effort of ITU-T WP 3/16 and ISO/IEC JTC 1/SC 29/WG 11. The resultant standard will be published as twin text by ITU-T and ISO/IEC; in the latter case, it will also be known as MPEG-H Part 2. This paper describes the block partitioning structure of the draft HEVC standard and presents the results of an analysis of coding efficiency and complexity. Of the many new technical aspects of HEVC, the block partitioning structure has been identified as representing one of the most significant changes relative to previous video coding standards. In contrast to the fixed size 16 × 16 macroblock structure of H.264/AVC, HEVC defines three different units according to their functionalities. The coding unit defines a region sharing the same prediction mode, e.g., intra and inter, and it is represented by the leaf node of a quadtree structure. The prediction unit defines a region sharing the same prediction information. The transform unit, specified by another quadtree, defines a region sharing the same transformation. This paper introduces technical details of the block partitioning structure of HEVC with an emphasis on the method of designing a consistent framework by combining the three different units together. Experimental results are provided to justify the role of each component of the block partitioning structure and a comparison with the H.264/AVC design is performed.
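The quadtree structure described above splits a block into four quadrants recursively, with each leaf becoming a coding unit. An illustrative sketch of that recursion follows; the split criterion here (block variance against a threshold) is a stand-in for the rate-distortion optimization an actual HEVC encoder performs, and all names and thresholds are hypothetical:

```python
import numpy as np

def split_cu(block, x=0, y=0, min_size=8, var_thresh=100.0):
    """Return a list of (x, y, size) leaf coding units covering `block`.

    A block is kept whole when it is already at the minimum CU size or
    is homogeneous (low variance); otherwise it is split into four
    quadrants and each quadrant is processed recursively.
    """
    size = block.shape[0]
    if size <= min_size or block.var() < var_thresh:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dy in (0, h):
        for dx in (0, h):
            leaves += split_cu(block[dy:dy + h, dx:dx + h],
                               x + dx, y + dy, min_size, var_thresh)
    return leaves

# A 64×64 block that is flat except for a noisy top-left quadrant:
frame = np.zeros((64, 64))
frame[:32, :32] = np.random.default_rng(0).integers(0, 255, (32, 32))
print(len(split_cu(frame)))  # more, smaller CUs cluster in the noisy region
```

This mirrors the design point made above: detailed regions get small CUs while flat regions are covered by large ones, and the PU/TU trees would then refine prediction and transform sizes within each leaf.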

433 citations