Author

Kyunghyun Sung

Bio: Kyunghyun Sung is an academic researcher at the University of California, Los Angeles. He has contributed to research in topics including Medicine and Flip angle, has an h-index of 19, and has co-authored 65 publications receiving 1,788 citations. Previous affiliations of Kyunghyun Sung include Ronald Reagan UCLA Medical Center and the University of Southern California.


Papers
Journal ArticleDOI
TL;DR: An extension of k-t FOCUSS to a more general framework with prediction and residual encoding is proposed, where the prediction provides an initial estimate and the residual encoding takes care of the remaining residual signals.
Abstract: A model-based dynamic MRI called k-t BLAST/SENSE has drawn significant attention from the MR imaging community because of its improved spatio-temporal resolution. Recently, we showed that the k-t BLAST/SENSE corresponds to the special case of a new dynamic MRI algorithm called k-t FOCUSS that is optimal from a compressed sensing perspective. The main contribution of this article is an extension of k-t FOCUSS to a more general framework with prediction and residual encoding, where the prediction provides an initial estimate and the residual encoding takes care of the remaining residual signals. Two prediction methods, RIGR and motion estimation/compensation scheme, are proposed, which significantly sparsify the residual signals. Then, using a more sophisticated random sampling pattern and optimized temporal transform, the residual signal can be effectively estimated from a very small number of k-t samples. Experimental results show that excellent reconstruction can be achieved even from severely limited k-t samples without aliasing artifacts. Magn Reson Med 61:103–116, 2009.
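The FOCUSS step at the heart of this framework is an iteratively reweighted minimum-norm solve that promotes sparse solutions. A minimal generic sketch of that iteration (not the k-t implementation; the matrix sizes, sparsity level, and iteration count below are illustrative assumptions):

```python
import numpy as np

def focuss(A, y, iters=10, eps=1e-8):
    """FOCUSS: iteratively reweighted minimum-norm estimation.
    Each pass reweights the unknowns by sqrt(|x|), concentrating
    energy onto a few coefficients and driving the rest to zero."""
    x = np.linalg.pinv(A) @ y                  # minimum-norm starting estimate
    for _ in range(iters):
        W = np.diag(np.sqrt(np.abs(x)) + eps)  # reweighting matrix
        q = np.linalg.pinv(A @ W) @ y          # weighted minimum-norm solve
        x = W @ q
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))              # underdetermined system
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]         # 3-sparse ground truth
x_hat = focuss(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.5))     # indices of the recovered support
```

In the paper's framework the same machinery is applied to the residual after prediction, which is far sparser than the dynamic images themselves.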

708 citations

Journal ArticleDOI
TL;DR: A novel deep learning approach with domain adaptation is proposed to restore high-resolution MR images from under-sampled k-space data and to remove streaking artifact patterns in magnetic resonance imaging.

241 citations

Journal ArticleDOI
TL;DR: A novel multi-class CNN, FocalNet, is proposed to jointly detect PCa lesions and predict their aggressiveness using Gleason score (GS), which characterizes lesion aggressiveness and fully utilizes distinctive knowledge from mp-MRI.
Abstract: Multi-parametric MRI (mp-MRI) is considered the best non-invasive imaging modality for diagnosing prostate cancer (PCa). However, mp-MRI for PCa diagnosis is currently limited by qualitative or semi-quantitative interpretation criteria, leading to inter-reader variability and a suboptimal ability to assess lesion aggressiveness. Convolutional neural networks (CNNs) are a powerful method for automatically learning discriminative features for various tasks, including cancer detection. We propose a novel multi-class CNN, FocalNet, to jointly detect PCa lesions and predict their aggressiveness using the Gleason score (GS). FocalNet characterizes lesion aggressiveness and fully utilizes distinctive knowledge from mp-MRI. We collected a prostate mp-MRI dataset from 417 patients who underwent 3T mp-MRI exams prior to robotic-assisted laparoscopic prostatectomy. FocalNet was trained and evaluated in this large study cohort with fivefold cross-validation. In the free-response receiver operating characteristic (FROC) analysis for lesion detection, FocalNet achieved 89.7% and 87.9% sensitivity for index lesions and clinically significant lesions, respectively, at one false positive per patient. For GS classification, evaluated by receiver operating characteristic (ROC) analysis, FocalNet achieved areas under the curve of 0.81 and 0.79 for the classification of clinically significant PCa (GS ≥ 3 + 4) and PCa with GS ≥ 4 + 3, respectively. Compared with the prospective performance of radiologists using the current diagnostic guideline, FocalNet demonstrated comparable detection sensitivity for index lesions and clinically significant lesions, only 3.4% and 1.5% lower, respectively, than that of highly experienced radiologists, without statistical significance.
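The FROC operating point quoted above, sensitivity at one false positive per patient, can be computed from ranked detections in a few lines. This is a generic sketch with made-up scores and labels, not FocalNet's evaluation code:

```python
import numpy as np

def sensitivity_at_fp(scores, is_true, n_patients, fp_per_patient=1.0):
    """Free-response ROC point: fraction of true lesions detected
    at a fixed false-positive rate per patient."""
    order = np.argsort(scores)[::-1]          # rank detections by confidence
    is_true = np.asarray(is_true)[order]
    n_lesions = is_true.sum()
    tp = fp = 0
    best = 0.0
    for hit in is_true:                       # sweep the threshold downward
        if hit:
            tp += 1
        else:
            fp += 1
        if fp / n_patients <= fp_per_patient: # still within the FP budget
            best = tp / n_lesions
    return best

# toy example: 4 patients, 5 true lesions among 10 detections
scores  = [0.9, 0.8, 0.75, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2]
is_true = [1,   1,   0,    1,   0,   1,   0,   0,    0,   1  ]
s = sensitivity_at_fp(scores, is_true, n_patients=4)
print(s)  # 0.8: four of five lesions found within 1 FP/patient
```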

142 citations

Journal ArticleDOI
TL;DR: In this paper, a novel convolutional neural network (CNN) was designed for automatic segmentation of the prostate transition zone (TZ) and peripheral zone (PZ) on T2-weighted (T2w) 3 Tesla (3T) MRI.
Abstract: Our main objective is to develop a novel deep learning-based algorithm for automatic segmentation of prostate zones and to evaluate its performance on an additional independent testing dataset in comparison with inter-reader agreement between two experts. With IRB approval and HIPAA compliance, we designed a novel convolutional neural network (CNN) for automatic segmentation of the prostatic transition zone (TZ) and peripheral zone (PZ) on T2-weighted (T2w) 3 Tesla (3T) MRI. The total study cohort included 359 MRI scans in two subcohorts: 313 scans from a deidentified publicly available dataset (the SPIE-AAPM-NCI PROSTATEx challenge) and 46 scans from a large U.S. tertiary referral center (the external testing dataset, ETD). The TZ and PZ contours were manually annotated by research fellows supervised by expert genitourinary (GU) radiologists. The model was developed using 250 patients, tested internally using the remaining 63 patients from PROSTATEx (the internal testing dataset, ITD), and tested again externally (n=46) using the ETD. The Dice Similarity Coefficient (DSC) was used to evaluate segmentation performance. DSCs for PZ and TZ were 0.74±0.08 and 0.86±0.07 in the ITD, respectively. In the ETD, DSCs for PZ and TZ were 0.74±0.07 and 0.79±0.12, respectively. Inter-reader consistency (Expert 2 vs. Expert 1) was 0.71±0.13 for PZ and 0.75±0.14 for TZ. This novel DL algorithm enabled automatic segmentation of PZ and TZ with high accuracy on both the ITD and the ETD, with no performance difference for PZ and less than a 10% difference for TZ. In the ETD, the proposed method was comparable to experts in the segmentation of prostate zones. Part of our source code and datasets with annotations is available at https://github.com/ykl-ucla/prostate_zonal_seg.
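The Dice Similarity Coefficient used to score these segmentations is simple to state: twice the overlap divided by the sum of the two mask sizes. A minimal sketch with toy binary masks (the mask shapes are hypothetical):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16-voxel prediction
ref  = np.zeros((8, 8), bool); ref[3:7, 3:7] = True   # 16-voxel reference, 9 overlap
print(round(dice(pred, ref), 4))  # 2*9/32 = 0.5625
```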

71 citations

Journal ArticleDOI
TL;DR: Variations in the transmitted radiofrequency (RF) (B1+) field in cardiac magnetic resonance imaging (MRI) at 3 Tesla are measured and characterized; knowledge of the B1+ field is necessary for the calibration of pulse sequences, image-based quantitation, and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) optimization.
Abstract: Purpose: To measure and characterize variations in the transmitted radiofrequency (RF) (B1+) field in cardiac magnetic resonance imaging (MRI) at 3 Tesla. Knowledge of the B1+ field is necessary for the calibration of pulse sequences, image-based quantitation, and signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) optimization. Materials and Methods: A variation of the saturated double-angle method for cardiac B1+ mapping is described. Eight healthy volunteers and two cardiac patients were scanned using six parallel short-axis slices spanning the left ventricle (LV). B1+ profiles were analyzed to determine the amount of variation and the dominant patterns of variation across the LV. Five to ten measurements were obtained in each volunteer to determine an upper bound on measurement repeatability. Results: Flip angle variation was found to be 23% to 48% over the LV in mid-short-axis slices and 32% to 63% over the entire LV volume. The standard deviation (SD) of multiple flip angle measurements was 1.4° over the LV in all subjects, indicating excellent repeatability of the proposed measurement method. The pattern of in-plane flip angle variation was found to be primarily unidirectional across the LV, with a residual variation of 3% in all subjects. Conclusion: The in-plane B1+ variation over the LV at 3T with body-coil transmission is on the order of 32% to 63% and is predominantly unidirectional in short-axis slices. Reproducible B1+ measurements over the whole heart can be obtained in a single breath-hold of 16 heartbeats.
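The double-angle relation behind this kind of B1+ mapping uses two acquisitions at nominal flip angles α and 2α: since sin(2α) = 2 sin(α) cos(α), the ratio of the two magnitude images gives cos(α) directly. A toy sketch of that relation (ideal saturated/fully-relaxed magnetization assumed; this is the core identity, not the paper's exact saturated sequence):

```python
import numpy as np

def double_angle_flip_map(img1, img2):
    """Actual flip angle (degrees) per voxel from two magnitude images
    acquired with nominal flip angles alpha and 2*alpha:
    alpha = arccos(S(2a) / (2 * S(a)))."""
    ratio = np.clip(img2 / (2.0 * img1), -1.0, 1.0)  # guard against noise
    return np.degrees(np.arccos(ratio))

# toy voxels with true flip angles 50-70 degrees; signal S(a) ∝ sin(a)
true = np.array([50.0, 60.0, 70.0])
s1 = np.sin(np.radians(true))
s2 = np.sin(np.radians(2 * true))
print(np.round(double_angle_flip_map(s1, s2), 1))  # recovers [50. 60. 70.]
```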

69 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems, which combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure.
Abstract: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ($H^{*}H$, where $H^{*}$ is the adjoint of the forward imaging operator $H$) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a $512 \times 512$ image on the GPU.
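The paper's starting observation, that the normal operator H*H of a convolutional forward model is itself a convolution (with the autocorrelation of the kernel), can be checked numerically in 1-D; the kernel and problem size below are arbitrary choices for illustration:

```python
import numpy as np

def conv_matrix(h, n):
    """Dense n-by-n matrix applying a 'same'-size sliding product
    with kernel h (boundary entries truncated)."""
    H = np.zeros((n, n))
    k = len(h) // 2
    for i in range(n):
        for j, hj in enumerate(h):
            col = i + j - k
            if 0 <= col < n:
                H[i, col] = hj
    return H

h = np.array([1.0, 2.0, 3.0])
n = 32
H = conv_matrix(h, n)
normal = H.T @ H                       # the normal operator H*H
g = np.correlate(h, h, mode="full")    # autocorrelation of the kernel
G = conv_matrix(g, n)                  # convolution with that autocorrelation
# away from the boundary the two operators agree exactly
print(np.allclose(normal[5:-5], G[5:-5]))  # True
```

This is why an unrolled gradient step, which repeatedly applies H*H, looks like a CNN layer: the linear part is a fixed convolution.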

1,757 citations

Journal ArticleDOI
TL;DR: Dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods are demonstrated.
Abstract: Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches, emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance, thereby favoring better sparsity and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary and uses it to remove aliasing and noise in one step, then restores and fills in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and a doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.
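The sparse-coding half of such an alternating scheme, coding each patch in a dictionary by thresholding its transform coefficients, can be sketched with a fixed orthonormal DCT dictionary standing in for the learned one. The learned-dictionary update is omitted, and the patch size, noise level, and threshold below are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II analysis matrix (rows are basis atoms)."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    D[0] /= np.sqrt(2)
    return D

def threshold_denoise(patch, D, thresh):
    """Sparse-code a patch in dictionary D by hard-thresholding its
    transform coefficients, then synthesize it back."""
    coeffs = D @ patch                   # analysis
    coeffs[np.abs(coeffs) < thresh] = 0  # keep only significant atoms
    return D.T @ coeffs                  # synthesis

D = dct_matrix(8)
rng = np.random.default_rng(0)
clean = D[2]                             # a patch that is exactly one atom
noisy = clean + 0.05 * rng.standard_normal(8)
denoised = threshold_denoise(noisy, D, thresh=0.3)
# thresholding suppresses the noise while keeping the sparse signal
print(np.max(np.abs(denoised - clean)) < np.max(np.abs(noisy - clean)))
```

The paper's contribution is to replace the fixed transform with a dictionary learned from the image being reconstructed, which is what buys the higher undersampling rates.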

1,015 citations

Journal ArticleDOI
01 Oct 2019
TL;DR: A major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample, which limits reliable interpretation of the reported diagnostic accuracy.
Abstract: Summary Background Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. Methods In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. Findings Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. 
Comparison of the performance between deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0-90·2) for deep learning models and 86·4% (79·9-91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1-96·4) for deep learning models and 90·5% (80·6-95·7) for health-care professionals. Interpretation Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology. Funding None.
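The per-study sensitivity and specificity extracted from the contingency tables reduce to two ratios; a minimal sketch with hypothetical counts:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 contingency table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical study: 90 of 100 diseased flagged, 80 of 100 healthy cleared
print(sens_spec(tp=90, fn=10, tn=80, fp=20))  # (0.9, 0.8)
```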

850 citations

01 Jan 2016
Magnetic Resonance Imaging: Physical Principles and Sequence Design (textbook; no abstract available).

695 citations

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, and this potential is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but instead put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval and from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging, by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

590 citations