Journal ArticleDOI

Super-resolution musculoskeletal MRI using deep learning.

TL;DR: To develop a super‐resolution technique using convolutional neural networks for generating thin‐slice knee MR images from thicker input slices, and compare this method with alternative through‐plane interpolation methods.
Abstract: PURPOSE To develop a super-resolution technique using convolutional neural networks for generating thin-slice knee MR images from thicker input slices, and compare this method with alternative through-plane interpolation methods.
METHODS We implemented a 3D convolutional neural network entitled DeepResolve to learn residual-based transformations between high-resolution thin-slice images and lower-resolution thick-slice images at the same center locations. DeepResolve was trained using 124 double echo in steady-state (DESS) data sets with 0.7-mm slice thickness and tested on 17 patients. Ground-truth images were compared with DeepResolve, clinically used tricubic interpolation, and Fourier interpolation methods, along with state-of-the-art single-image sparse-coding super-resolution. Comparisons were performed using structural similarity, peak SNR, and RMS error image quality metrics for a multitude of thin-slice downsampling factors. Two musculoskeletal radiologists ranked the 3 data sets and reviewed the diagnostic quality of the DeepResolve, tricubic interpolation, and ground-truth images for sharpness, contrast, artifacts, SNR, and overall diagnostic quality. Mann-Whitney U tests evaluated differences among the quantitative image metrics, reader scores, and rankings. Cohen's kappa (κ) evaluated interreader reliability.
RESULTS DeepResolve had significantly better structural similarity, peak SNR, and RMS error than tricubic interpolation, Fourier interpolation, and sparse-coding super-resolution for all downsampling factors (p < .05, except 4× and 8× sparse-coding super-resolution downsampling factors). In the reader study, DeepResolve significantly outperformed (p < .01) tricubic interpolation in all image quality categories and overall image ranking. Both readers had substantial scoring agreement (κ = 0.73).
CONCLUSION DeepResolve was capable of resolving high-resolution thin-slice knee MRI from lower-resolution thicker slices, achieving superior quantitative and qualitative diagnostic performance to both conventionally used and state-of-the-art methods.
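
Since the comparisons above hinge on structural similarity, peak SNR, and RMS error, a minimal NumPy/scikit-image sketch of these three metrics is given below; the array names, toy data, and use of scikit-image for SSIM are illustrative assumptions, not the authors' evaluation code.

import numpy as np
from skimage.metrics import structural_similarity  # SSIM implementation from scikit-image

def rms_error(ground_truth, estimate):
    # Root-mean-square error between two same-sized images.
    return np.sqrt(np.mean((ground_truth - estimate) ** 2))

def peak_snr(ground_truth, estimate, data_range=1.0):
    # Peak SNR in dB for images normalized to the given data range.
    return 20 * np.log10(data_range / rms_error(ground_truth, estimate))

# Toy stand-ins for a thin-slice ground-truth image and a super-resolved estimate.
rng = np.random.default_rng(0)
gt = rng.random((256, 256))
est = gt + 0.01 * rng.standard_normal((256, 256))

print("RMSE:", rms_error(gt, est))
print("pSNR:", peak_snr(gt, est, data_range=1.0))
print("SSIM:", structural_similarity(gt, est, data_range=1.0))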
Citations
Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is only slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

590 citations


Cites methods from "Super-resolution musculoskeletal MR..."

  • ...generating super-resolution single (no reference information) and multi-contrast (applying a high-resolution image of another modality as reference) brain MR images using CNNs [196]; constructing superresolution brain MRI by a CNN stacked by multi-scale fusion units [197]; and super-resolution musculoskeletal MRI (“DeepResolve”) [198]....

    [...]

Journal ArticleDOI
TL;DR: Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems as mentioned in this paper, and it has shown promising performance in a variety of sophisticated tasks, especially those related to images.
Abstract: Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep-learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mainly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we discuss the general context of radiology and opportunities for application of deep-learning algorithms. We also introduce basic concepts of deep learning, including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review a broad range of deep-learning algorithms being utilized. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future. Level of Evidence: 3 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2019;49:939-954.

246 citations

Journal ArticleDOI
TL;DR: In this article, a semi-supervised deep learning approach was proposed to recover high-resolution (HR) CT images from low resolution (LR) counterparts by enforcing the cycle-consistency in terms of Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs.
Abstract: Computed tomography (CT) is widely used in screening, diagnosis, and image-guided therapy for both clinical and research purposes. Since CT involves ionizing radiation, an overarching thrust of related technical research is development of novel methods enabling ultrahigh quality imaging with fine structural details while reducing the X-ray radiation. In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include the joint constraints in the loss function to facilitate structural preservation. In this deep imaging process, we incorporate deep convolutional neural network (CNN), residual learning, and network in network techniques for feature extraction and restoration. In contrast to the current trend of increasing network depth and complexity to boost CT imaging performance, which limits real-world application by imposing considerable computational and memory overheads, we apply a parallel 1×1 CNN to compress the output of the hidden layer and optimize the number of layers and the number of filters for each convolutional layer. Quantitative and qualitative evaluations demonstrate that our proposed model is accurate, efficient and robust for super-resolution (SR) image restoration from noisy LR input images. In particular, we validate our composite SR networks on three large-scale CT datasets, and obtain promising results as compared to the other state-of-the-art methods.
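
A schematic Python/TensorFlow sketch of the two loss ingredients described in this abstract, the cycle-consistency term between the LR→HR and HR→LR mappings and a Wasserstein-style critic objective, is shown below; the generator and critic call signatures are assumptions for illustration rather than the authors' implementation, and the joint structural constraints are omitted.

import tensorflow as tf

def cycle_consistency_loss(x_lr, x_hr, g_lr2hr, f_hr2lr):
    # L1 cycle-consistency: mapping LR->HR->LR (and HR->LR->HR) should reproduce the input.
    lr_cycle = f_hr2lr(g_lr2hr(x_lr))
    hr_cycle = g_lr2hr(f_hr2lr(x_hr))
    return (tf.reduce_mean(tf.abs(lr_cycle - x_lr))
            + tf.reduce_mean(tf.abs(hr_cycle - x_hr)))

def wasserstein_critic_loss(critic, real_hr, fake_hr):
    # The critic approximates the Wasserstein distance: score real HR images high, generated ones low.
    return tf.reduce_mean(critic(fake_hr)) - tf.reduce_mean(critic(real_hr))

def wasserstein_generator_loss(critic, fake_hr):
    # The generator is trained to raise the critic's score on its outputs.
    return -tf.reduce_mean(critic(fake_hr))

In a WGAN-style setup the critic additionally needs a Lipschitz constraint (weight clipping or a gradient penalty), which is left out of this sketch.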

242 citations

Posted Content
TL;DR: The general context of radiology and opportunities for application of deep‐learning algorithms and basic concepts of deep learning are discussed, including convolutional neural networks and a survey of the research in deep learning applied to radiology are presented.
Abstract: Deep learning is a branch of artificial intelligence where networks of simple interconnected units are used to extract patterns from data in order to solve complex problems. Deep learning algorithms have shown groundbreaking performance in a variety of sophisticated tasks, especially those related to images. They have often matched or exceeded human performance. Since the medical field of radiology mostly relies on extracting useful information from images, it is a very natural application area for deep learning, and research in this area has rapidly grown in recent years. In this article, we review the clinical reality of radiology and discuss the opportunities for application of deep learning algorithms. We also introduce basic concepts of deep learning including convolutional neural networks. Then, we present a survey of the research in deep learning applied to radiology. We organize the studies by the types of specific tasks that they attempt to solve and review the broad range of utilized deep learning algorithms. Finally, we briefly discuss opportunities and challenges for incorporating deep learning in the radiology practice of the future.

201 citations

References
Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
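
A compact NumPy sketch of the Adam update described above, the bias-corrected first- and second-moment estimates with a per-parameter adaptive step, is given below; the function name and the toy usage (including the larger learning rate passed in the loop) are illustrative.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update using adaptive estimates of the first (m) and second (v) moments.
    m = beta1 * m + (1 - beta1) * grad          # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(x) = x**2, whose gradient is 2x, starting from x = 5.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2.0 * theta, m, v, t, lr=0.1)
print(theta)  # should end near the minimum at 0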

111,197 citations


"Super-resolution musculoskeletal MR..." refers methods in this paper

  • ...DeepResolve training was performed on 124 data sets using stochastic gradient descent using Keras and a Tensorflow backend (Google, Mountain View, CA).(34,35) Hyperparameter optimization was performed by minimizing L2 loss on the 35 validation data sets with a learning rate of 0....

    [...]
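
The excerpt above describes training DeepResolve in Keras on a TensorFlow backend by minimizing an L2 loss against held-out validation data. A minimal residual-learning sketch in that spirit follows; the layer counts, filter sizes, patch shape, optimizer choice, and random toy volumes are assumptions for illustration, not the published DeepResolve architecture.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Small residual-learning 3D CNN: the network predicts a residual that, added to the
# interpolated thick-slice input, approximates the thin-slice ground truth.
inputs = keras.Input(shape=(32, 32, 32, 1))            # toy 3D patch, single channel
x = layers.Conv3D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv3D(32, 3, padding="same", activation="relu")(x)
residual = layers.Conv3D(1, 3, padding="same")(x)       # predicted residual volume
outputs = layers.Add()([inputs, residual])              # residual connection back to the input
model = keras.Model(inputs, outputs)

# L2 (mean squared error) loss, as in the excerpt; the optimizer here is illustrative.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

# Random toy volumes standing in for the DESS training and validation patches.
x_train = np.random.rand(8, 32, 32, 32, 1).astype("float32")
y_train = np.random.rand(8, 32, 32, 32, 1).astype("float32")
x_val = np.random.rand(2, 32, 32, 32, 1).astype("float32")
y_val = np.random.rand(2, 32, 32, 32, 1).astype("float32")

model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=1, batch_size=2)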

Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification, as discussed by the authors.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
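
A hedged Keras sketch of the network this abstract describes, five convolutional layers (some followed by max-pooling) and three fully-connected layers with dropout and a final 1000-way softmax, is given below; the filter counts and sizes follow the commonly reported AlexNet configuration, while the two-GPU split and local response normalization of the original are omitted.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(227, 227, 3)),
    layers.Conv2D(96, 11, strides=4, activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),                      # the "dropout" regularization from the abstract
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="softmax"), # 1000-way softmax over the ImageNet classes
])
model.summary()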

73,978 citations

Journal ArticleDOI
TL;DR: A general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies is presented and tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interob server agreement are developed as generalized kappa-type statistics.
Abstract: This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.
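
The DeepResolve reader study uses a linearly weighted form of this kappa statistic; a small NumPy sketch of that computation is given below, with the function name and toy ratings chosen for illustration (scikit-learn's cohen_kappa_score with weights='linear' offers an equivalent cross-check).

import numpy as np

def linearly_weighted_kappa(rater1, rater2, n_categories):
    # Linearly weighted Cohen's kappa for two raters' ordinal scores in {0, ..., n_categories - 1}.
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()                                       # observed proportions
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))  # chance agreement from marginals
    i, j = np.indices((n_categories, n_categories))
    disagreement = np.abs(i - j) / (n_categories - 1)                # linear disagreement weights
    return 1.0 - (disagreement * observed).sum() / (disagreement * expected).sum()

# Toy ordinal image-quality scores from two readers on a 0-3 scale.
reader_a = [3, 2, 3, 1, 3, 2, 3, 3]
reader_b = [3, 2, 2, 1, 3, 2, 3, 2]
print(linearly_weighted_kappa(reader_a, reader_b, n_categories=4))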

64,109 citations


"Super-resolution musculoskeletal MR..." refers methods in this paper

  • ...99).(41) FIGURE 5 Example coronal ground-truth, DeepResolve, and TCI images are shown (A-C) in addition to the generated DeepResolve residual map (D) in comparison with the actual residual between the ground truth and TCI (E)....

    [...]

  • ...A linearly-weighted Cohen’s kappa (κ) and its confidence interval were calculated to evaluate interreader variability for the image quality ratings and the image rankings.(41,42) All numerical analysis and data preprocessing were performed using MATLAB....

    [...]

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations


"Super-resolution musculoskeletal MR..." refers background in this paper

  • ...It has been shown that filters with larger supports can be effectively decomposed into several smaller filters, and that larger filters introduce unnecessary redundancy during the network training process.(50) As a result, a small convolution kernel efficiently maximized high-resolution feature extraction....

    [...]
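
The excerpt above motivates small convolution kernels; the short worked comparison below, assuming an arbitrary channel count C, shows that two stacked 3x3 layers cover the same 5x5 receptive field as a single 5x5 layer with fewer weights, while also inserting an extra nonlinearity between them.

# Two stacked 3x3 convolutions match the 5x5 receptive field of one 5x5 convolution
# with fewer parameters (biases ignored); C = 64 is an illustrative channel count.
C = 64
params_one_5x5 = 5 * 5 * C * C           # 102,400 weights
params_two_3x3 = 2 * (3 * 3 * C * C)     # 73,728 weights
print(params_one_5x5, params_two_3x3)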
