Author

Eric K. Gibbons

Other affiliations: University of Utah
Bio: Eric K. Gibbons is an academic researcher from Stanford University. The author has contributed to research in topics including imaging phantoms and signal-to-noise ratio (imaging). The author has an h-index of 4 and has co-authored 7 publications receiving 250 citations. Previous affiliations of Eric K. Gibbons include the University of Utah.

Papers
Journal Article (DOI)
TL;DR: The authors develop a super-resolution technique using convolutional neural networks to generate thin-slice knee MR images from thicker input slices, and compare this method with alternative through-plane interpolation methods.
Abstract: PURPOSE To develop a super-resolution technique using convolutional neural networks for generating thin-slice knee MR images from thicker input slices, and compare this method with alternative through-plane interpolation methods. METHODS We implemented a 3D convolutional neural network entitled DeepResolve to learn residual-based transformations between high-resolution thin-slice images and lower-resolution thick-slice images at the same center locations. DeepResolve was trained using 124 double echo in steady-state (DESS) data sets with 0.7-mm slice thickness and tested on 17 patients. Ground-truth images were compared with DeepResolve, clinically used tricubic interpolation, and Fourier interpolation methods, along with state-of-the-art single-image sparse-coding super-resolution. Comparisons were performed using structural similarity, peak SNR, and RMS error image quality metrics for a multitude of thin-slice downsampling factors. Two musculoskeletal radiologists ranked the 3 data sets and reviewed the diagnostic quality of the DeepResolve, tricubic interpolation, and ground-truth images for sharpness, contrast, artifacts, SNR, and overall diagnostic quality. Mann-Whitney U tests evaluated differences among the quantitative image metrics, reader scores, and rankings. Cohen's Kappa (κ) evaluated interreader reliability. RESULTS DeepResolve had significantly better structural similarity, peak SNR, and RMS error than tricubic interpolation, Fourier interpolation, and sparse-coding super-resolution for all downsampling factors (p < .05, except 4 × and 8 × sparse-coding super-resolution downsampling factors). In the reader study, DeepResolve significantly outperformed (p < .01) tricubic interpolation in all image quality categories and overall image ranking. Both readers had substantial scoring agreement (κ = 0.73). CONCLUSION DeepResolve was capable of resolving high-resolution thin-slice knee MRI from lower-resolution thicker slices, achieving superior quantitative and qualitative diagnostic performance to both conventionally used and state-of-the-art methods.
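
As a rough illustration of the residual-learning setup described in this abstract, the sketch below builds a small 3D convolutional network that adds a learned residual to an interpolated thick-slice volume and evaluates it with RMS error and peak SNR. The depth, filter counts, and patch size are illustrative assumptions, not the published DeepResolve architecture; structural similarity could be added with, for example, skimage.metrics.structural_similarity.

```python
# Minimal sketch of a 3D residual super-resolution CNN in the spirit of the
# abstract above: the input is a through-plane-interpolated thick-slice volume
# and the network learns the residual to the thin-slice target. Depth, filter
# counts, and patch size are assumptions for illustration only.
import torch
import torch.nn as nn

class ResidualSR3D(nn.Module):
    def __init__(self, channels=64, depth=8):
        super().__init__()
        layers = [nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv3d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: predict the missing high-frequency content and
        # add it back to the interpolated low-resolution input.
        return x + self.body(x)

def rmse(x, y):
    return torch.sqrt(torch.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    return 20 * torch.log10(data_range / rmse(x, y))

# Example forward pass on a random patch: (batch, channel, slices, height, width).
model = ResidualSR3D()
lowres = torch.rand(1, 1, 32, 64, 64)
pred = model(lowres)
print(pred.shape, psnr(pred, torch.rand_like(pred)).item())
```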

243 citations

Journal Article (DOI)
TL;DR: The authors develop a robust multidimensional deep-learning-based method to simultaneously generate accurate neurite orientation dispersion and density imaging (NODDI) and generalized fractional anisotropy (GFA) parameter maps from undersampled q-space datasets for use in stroke imaging.
Abstract: Purpose To develop a robust multidimensional deep-learning based method to simultaneously generate accurate neurite orientation dispersion and density imaging (NODDI) and generalized fractional anisotropy (GFA) parameter maps from undersampled q-space datasets for use in stroke imaging. Methods Traditional diffusion spectrum imaging (DSI) capable of producing accurate NODDI and GFA parameter maps requires hundreds of q-space samples, which renders the scan time clinically untenable. A convolutional neural network (CNN) was trained to generate NODDI and GFA parameter maps simultaneously from 10× undersampled q-space data. A total of 48 DSI scans from 15 stroke patients and 14 normal subjects were acquired for training, validating, and testing this method. The proposed network was compared to previously proposed voxel-wise machine learning based approaches for q-space imaging. Network-generated images were used to predict stroke functional outcome measures. Results The proposed network achieves significant performance advantages compared to previously proposed machine learning approaches, showing significant improvements across image quality metrics. Generating these parameter maps using CNNs also comes with the computational benefit of needing to generate and train only a single network instead of multiple networks for each parameter type. Post-stroke outcome prediction metrics do not appreciably change when using images generated from this proposed technique. Over three test participants, the predicted stroke functional outcome scores were within 1-6% of the clinical evaluations. Conclusions NODDI and GFA parameters estimated simultaneously with a deep learning network from highly undersampled q-space data were improved compared to other state-of-the-art methods, providing a 10-fold reduction in scan time compared to conventional methods.
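
The sketch below illustrates the general idea of mapping undersampled q-space measurements (one input channel per retained diffusion direction) to several microstructure maps with a single network. The number of retained samples, the output channels, and the layer shapes are assumptions for illustration and do not reproduce the architecture used in the paper.

```python
# Sketch of a CNN that maps undersampled q-space data (one channel per
# retained diffusion direction) to several parameter maps at once, e.g. the
# NODDI maps plus GFA. Channel counts and layer shapes are assumptions.
import torch
import torch.nn as nn

N_QSPACE = 26   # assumed number of retained q-space samples after 10x undersampling
N_MAPS = 4      # e.g. NODDI intra-cellular/isotropic fractions, dispersion, and GFA

class QSpaceToMaps(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(N_QSPACE, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, N_MAPS, 1),   # one output channel per parameter map
        )

    def forward(self, q):                # q: (batch, N_QSPACE, height, width)
        return self.net(q)

maps = QSpaceToMaps()(torch.rand(2, N_QSPACE, 96, 96))
print(maps.shape)                        # torch.Size([2, 4, 96, 96])
```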

51 citations

Journal Article (DOI)
TL;DR: Super-resolution is an emerging method for enhancing MRI resolution, but its impact on image quality is still unknown; this work evaluates it using quantitative and qualitative metrics of cartilage morphometry, osteophyte detection, and global image blurring.
Abstract: BACKGROUND Super-resolution is an emerging method for enhancing MRI resolution; however, its impact on image quality is still unknown. PURPOSE To evaluate MRI super-resolution using quantitative and qualitative metrics of cartilage morphometry, osteophyte detection, and global image blurring. STUDY TYPE Retrospective. POPULATION In all, 176 MRI studies of subjects at varying stages of osteoarthritis. FIELD STRENGTH/SEQUENCE Original-resolution 3D double-echo steady-state (DESS) and DESS with 3× thicker slices retrospectively enhanced using super-resolution and tricubic interpolation (TCI) at 3T. ASSESSMENT A quantitative comparison of femoral cartilage morphometry was performed for the original-resolution DESS, the super-resolution, and the TCI scans in 17 subjects. A reader study by three musculoskeletal radiologists assessed cartilage image quality, overall image sharpness, and osteophyte incidence in all three sets of scans. A referenceless blurring metric evaluated blurring in all three image dimensions for the three sets of scans. STATISTICAL TESTS Mann-Whitney U-tests compared Dice coefficients (DC) of segmentation accuracy for the DESS, super-resolution, and TCI images, along with the image quality readings and blurring metrics. Sensitivity, specificity, and diagnostic odds ratio (DOR) with 95% confidence intervals compared osteophyte detection for the super-resolution and TCI images, with the original resolution as a reference. RESULTS DC for the original-resolution (90.2 ± 1.7%) and super-resolution (89.6 ± 2.0%) images were significantly higher (P < 0.001) than for TCI (86.3 ± 5.6%). Segmentation overlap of super-resolution with the original-resolution images (DC = 97.6 ± 0.7%) was significantly higher (P < 0.0001) than TCI overlap (DC = 95.0 ± 1.1%). Cartilage image quality for sharpness and contrast levels, and the through-plane quantitative blur factor for super-resolution images, were significantly (P < 0.001) better than for TCI. Super-resolution osteophyte detection sensitivity of 80% (76-82%), specificity of 93% (92-94%), and DOR of 32 (22-46) were significantly higher (P < 0.001) than TCI sensitivity of 73% (69-76%), specificity of 90% (89-91%), and DOR of 17 (13-22). DATA CONCLUSION Super-resolution appears to consistently outperform naive interpolation and may improve image quality without biasing quantitative biomarkers. LEVEL OF EVIDENCE 2 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2020;51:768-779.
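
For reference, the evaluation statistics reported above can be computed as in the sketch below: Dice overlap between two binary segmentations, plus sensitivity, specificity, and the diagnostic odds ratio with a 95% confidence interval from 2×2 detection counts. The example arrays and counts are made up purely for illustration.

```python
# Dice coefficient for segmentation overlap, and sensitivity/specificity/
# diagnostic odds ratio (DOR) with a 95% CI from 2x2 detection counts.
# The segmentations and counts below are synthetic placeholders.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def dor_with_ci(tp, fp, fn, tn, z=1.96):
    dor = (tp * tn) / (fp * fn)
    se_log = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)   # SE of log(DOR)
    lo, hi = np.exp(np.log(dor) + np.array([-z, z]) * se_log)
    return dor, (lo, hi)

seg_a = np.random.rand(128, 128) > 0.5     # placeholder binary segmentations
seg_b = np.random.rand(128, 128) > 0.5
print("Dice:", dice(seg_a, seg_b))

tp, fp, fn, tn = 80, 7, 20, 93             # hypothetical osteophyte detection counts
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print("sens:", sensitivity, "spec:", specificity, "DOR (95% CI):", dor_with_ci(tp, fp, fn, tn))
```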

47 citations

Journal Article (DOI)
TL;DR: This method can capture distortion-free DWI images near areas of significant off-resonance while preserving adequate SNR; parallel imaging and DIVERSE refocusing RF pulses allow a shorter ETL than previous implementations, reducing phase-encode-direction blur and SAR accumulation.
Abstract: SS-FSE is a fast technique that does not suffer from off-resonance distortions to the degree that EPI does. Unlike EPI, SS-FSE is ill-suited to diffusion weighted imaging (DWI) due to the Carr-Purcell-Meiboom-Gill (CPMG) condition. Non-CPMG phase cycling does accommodate SS-FSE and DWI but places constraints on reconstruction, which are resolved here through parallel imaging. Additionally, improved echo stability can be achieved by using short-duration and highly selective DIVERSE radiofrequency pulses. Here, signal-to-noise ratio (SNR) comparisons between EPI and nCPMG SS-FSE acquisitions and reconstruction techniques yield similar values. Diffusion imaging with nCPMG SS-FSE gives similar SNR to an EPI acquisition, though apparent diffusion coefficient values are higher than seen with EPI. In vivo images have good image quality with little distortion. This method has the ability to capture distortion-free DWI images near areas of significant off-resonance as well as preserve adequate SNR. Parallel imaging and DIVERSE refocusing RF pulses allow a shorter ETL compared to previous implementations and thus reduce phase-encode-direction blur and SAR accumulation.
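
Because the abstract compares apparent diffusion coefficient (ADC) values between the EPI and nCPMG SS-FSE acquisitions, a minimal sketch of the standard mono-exponential ADC estimate is shown below; the b-value and the synthetic images are assumptions for illustration.

```python
# Standard mono-exponential ADC estimate: ADC = -ln(S_b / S_0) / b, computed
# from a non-diffusion-weighted image S_0 and a diffusion-weighted image S_b.
# The b-value and synthetic images are placeholders.
import numpy as np

def adc_map(s0, sb, b=1000.0, eps=1e-6):
    """ADC in mm^2/s given b in s/mm^2; eps guards against division by zero."""
    ratio = np.clip(sb, eps, None) / np.clip(s0, eps, None)
    return -np.log(ratio) / b

s0 = np.random.rand(64, 64) + 1.0          # synthetic b=0 image
sb = s0 * np.exp(-1000.0 * 0.8e-3)         # tissue-like ADC of ~0.8e-3 mm^2/s
print(adc_map(s0, sb).mean())              # ~0.0008 mm^2/s
```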

9 citations

Journal Article (DOI)
TL;DR: A magnetization-prepared diffusion-weighted single-shot fast spin echo (SS-FSE) pulse sequence is proposed for body imaging to improve robustness to geometric distortion, together with a scan-averaging technique that is superior to magnitude averaging and is not subject to artifacts caused by object phase.
Abstract: Purpose This work demonstrates a magnetization prepared diffusion-weighted single-shot fast spin echo (SS-FSE) pulse sequence for the application of body imaging to improve robustness to geometric distortion. This work also proposes a scan averaging technique that is superior to magnitude averaging and is not subject to artifacts due to object phase. Theory and methods This single-shot sequence is robust against violation of the Carr-Purcell-Meiboom-Gill (CPMG) condition. This is achieved by dephasing the signal after diffusion weighting and tipping the MG component of the signal onto the longitudinal axis while the non-MG component is spoiled. The MG signal component is then excited and captured using a traditional SS-FSE sequence, although the echo needs to be recalled prior to each echo. Extended Parallel Imaging (ExtPI) averaging is used where coil sensitivities from the multiple acquisitions are concatenated into one large parallel imaging (PI) problem. The size of the PI problem is reduced by SVD-based coil compression which also provides background noise suppression. This sequence and reconstruction are evaluated in simulation, phantom scans, and in vivo abdominal clinical cases. Results Simulations show that the sequence generates a stable signal throughout the echo train which leads to good image quality. This sequence is inherently low-SNR, but much of the SNR can be regained through scan averaging and the proposed ExtPI reconstruction. In vivo results show that the proposed method is able to provide diffusion encoded images while mitigating geometric distortion artifacts compared to EPI. Conclusion This work presents a diffusion-prepared SS-FSE sequence that is robust against the violation of the CPMG condition while providing diffusion contrast in clinical cases. Magn Reson Med 79:3032-3044, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
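
The sketch below is a deliberately simplified, conceptual illustration of the ExtPI averaging idea: per-acquisition object phase is folded into extended coil sensitivities, the repeated acquisitions are stacked as extra virtual coils, and a single least-squares solve returns one coherent image, optionally after SVD-based coil compression. The problem sizes, the noise level, the absence of k-space undersampling, and the assumption that the shot phases are known are all simplifications, not the published reconstruction.

```python
# Conceptual ExtPI-style averaging: stack repeated acquisitions as virtual
# coils with "extended" sensitivities (coil sensitivity x known shot phase)
# and solve one least-squares problem per voxel; SVD coil compression shrinks
# the stacked problem. All sizes and the phase/noise model are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_coils, n_avg, n_vox = 8, 4, 256

obj = rng.standard_normal(n_vox) + 1j * rng.standard_normal(n_vox)      # true image
sens = rng.standard_normal((n_coils, n_vox)) + 1j * rng.standard_normal((n_coils, n_vox))
shot_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_avg, 1)))    # per-shot phase

# Simulated noisy multi-coil data for each acquisition (average).
data = np.stack([sens * (obj * shot_phase[a]) +
                 0.05 * (rng.standard_normal((n_coils, n_vox)) +
                         1j * rng.standard_normal((n_coils, n_vox)))
                 for a in range(n_avg)])                  # (n_avg, n_coils, n_vox)

# Extended sensitivities: concatenate (sensitivity x shot phase) over averages.
ext_sens = np.concatenate([sens * shot_phase[a] for a in range(n_avg)], axis=0)
ext_data = data.reshape(n_avg * n_coils, n_vox)

# SVD-based coil compression: keep dominant virtual channels, applied
# identically to the data and the extended sensitivities.
u, _, _ = np.linalg.svd(ext_data, full_matrices=False)
keep = 12
cdata = u[:, :keep].conj().T @ ext_data
csens = u[:, :keep].conj().T @ ext_sens

# Per-voxel least-squares solve (each voxel decouples in this toy setup).
est = (csens.conj() * cdata).sum(0) / (np.abs(csens) ** 2).sum(0)
print(np.abs(est - obj).max())            # residual on the order of the noise level
```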

7 citations


Cited by
Journal Article (DOI)
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, which is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal Article (DOI)
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, which is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

590 citations

Journal Article (DOI)
TL;DR: Deep learning is a branch of artificial intelligence in which networks of simple interconnected units are used to extract patterns from data in order to solve complex problems; deep-learning algorithms have shown promising performance in a variety of sophisticated tasks, especially those related to images.
Abstract: Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep-learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mainly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we discuss the general context of radiology and opportunities for application of deep-learning algorithms. We also introduce basic concepts of deep learning, including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review a broad range of deep-learning algorithms being utilized. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future. Level of Evidence: 3 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;49:939-954.

246 citations

Journal Article (DOI)
TL;DR: The authors propose a semi-supervised deep-learning approach that recovers high-resolution (HR) CT images from low-resolution (LR) counterparts, enforcing cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs.
Abstract: Computed tomography (CT) is widely used in screening, diagnosis, and image-guided therapy for both clinical and research purposes. Since CT involves ionizing radiation, an overarching thrust of related technical research is development of novel methods enabling ultrahigh quality imaging with fine structural details while reducing the X-ray radiation. In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include the joint constraints in the loss function to facilitate structural preservation. In this deep imaging process, we incorporate deep convolutional neural network (CNN), residual learning, and network-in-network techniques for feature extraction and restoration. In contrast to the current trend of increasing network depth and complexity to boost CT imaging performance, which limits real-world applications by imposing considerable computational and memory overheads, we apply a parallel 1×1 CNN to compress the output of the hidden layer and optimize the number of layers and the number of filters for each convolutional layer. Quantitative and qualitative evaluations demonstrate that our proposed model is accurate, efficient and robust for super-resolution (SR) image restoration from noisy LR input images. In particular, we validate our composite SR networks on three large-scale CT datasets, and obtain promising results as compared to the other state-of-the-art methods.
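
As a loose sketch of two ingredients named in this abstract, the code below pairs a cycle-consistency term with a Wasserstein-style critic loss and uses a 1×1 convolution to compress the channel dimension of a hidden layer. The tiny generator and critic networks, the loss weighting, and the equal input/output patch sizes are placeholder assumptions; the gradient penalty and the joint constraints described in the paper are not shown.

```python
# Toy illustration of a cycle-consistency term combined with a Wasserstein-
# style critic loss, plus a 1x1 convolution used to compress hidden-layer
# channels. Architectures and the loss weight are placeholders only.
import torch
import torch.nn as nn
import torch.nn.functional as F

g_lr2hr = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 8, 1),             # 1x1 conv channel compression
                        nn.Conv2d(8, 1, 3, padding=1))
g_hr2lr = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))
critic = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

lr = torch.rand(4, 1, 64, 64)              # noisy "low-resolution" CT patches
hr = torch.rand(4, 1, 64, 64)              # "high-resolution" CT patches

fake_hr = g_lr2hr(lr)
cycle = F.l1_loss(g_hr2lr(fake_hr), lr)                    # cycle-consistency term
gen_loss = -critic(fake_hr).mean() + 10.0 * cycle          # assumed weighting of 10
critic_loss = critic(fake_hr.detach()).mean() - critic(hr).mean()   # no gradient penalty shown
print(float(gen_loss), float(critic_loss))
```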

242 citations

Posted Content
TL;DR: The general context of radiology and opportunities for application of deep‐learning algorithms and basic concepts of deep learning are discussed, including convolutional neural networks and a survey of the research in deep learning applied to radiology are presented.
Abstract: Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mostly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we review the clinical reality of radiology and discuss the opportunities for application of deep learning algorithms. We also introduce basic concepts of deep learning including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review the broad range of utilized deep learning algorithms. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future.

201 citations