Journal ArticleDOI

Medical imaging and computational image analysis in COVID-19 diagnosis: A review.

TL;DR: This study reviews papers on the role of imaging and medical image computing in COVID-19 diagnosis and discusses the research limitations in this field and the methods used to overcome them.
About: This article was published in Computers in Biology and Medicine on 2021-06-23 and is currently open access. It has received 23 citations to date and focuses on the topic of medical imaging.
Citations
Journal Article
TL;DR: The early chest CT images of children with 2019-nCoV infection mostly show small nodular ground glass opacities, and dynamic reexamination of chest CT and nucleic acid testing is important.
Abstract: Objective: To explore imaging characteristics of children with 2019 novel coronavirus (2019-nCoV) infection. Methods: A retrospective analysis was performed on clinical data and chest CT images of 15 children diagnosed with 2019-nCoV infection, admitted to the Third People's Hospital of Shenzhen from January 16 to February 6, 2020. The distribution and morphology of pulmonary lesions on chest CT images were analyzed. Results: Among the 15 children, there were 5 males and 10 females, aged 4 to 14 years. Five of the 15 children were febrile and 10 were asymptomatic on first visit. The first nasal or pharyngeal swab samples in all 15 cases were positive for 2019-nCoV nucleic acid. On their first chest CT images, 6 patients had no lesions, while 9 patients had pulmonary inflammatory lesions: 7 cases of small nodular ground glass opacities and 2 cases of speckled ground glass opacities. After 3 to 5 days of treatment, 2019-nCoV nucleic acid in a second respiratory sample turned negative in 6 cases; among these, chest CT images showed fewer lesions in 2 cases, no lesions in 3 cases, and no improvement in 1 case. The other 9 cases were still positive on a second nucleic acid test; 6 of them showed similar chest CT inflammation, while 3 had new lesions, all small nodular ground glass opacities. Conclusions: The early chest CT images of children with 2019-nCoV infection are mostly small nodular ground glass opacities. The clinical symptoms of children with 2019-nCoV infection are nonspecific. Dynamic reexamination of chest CT and nucleic acid testing are important.

111 citations

Journal ArticleDOI
TL;DR: The journal offers information for the nuclear medicine community and allied professions involved in the functional, metabolic and molecular investigation of disease and presents in-depth reviews, short communications, controversies, interesting images and letters to the Editor.
Abstract:
  • Official Journal of the European Association of Nuclear Medicine (EANM)
  • Offers information for the nuclear medicine community and allied professions involved in the functional, metabolic and molecular investigation of disease
  • Coverage extends to physics, dosimetry, radiation biology, radiochemistry and pharmacy
  • Presents in-depth reviews, short communications, controversies, interesting images and letters to the Editor
  • 96% of authors who answered a survey reported that they would definitely publish or probably publish in the journal again

60 citations

Journal ArticleDOI
TL;DR: In this paper, a self-supervised deep neural network that is pretrained on an unlabeled chest X-ray dataset is used for classification of pneumonia and different pneumonia types.
Abstract: Chest radiography is a relatively cheap, widely available medical procedure that conveys key information for making diagnostic decisions. Chest X-rays are frequently used in the diagnosis of respiratory diseases such as pneumonia or COVID-19. In this paper, we propose a self-supervised deep neural network that is pretrained on an unlabeled chest X-ray dataset. Pretraining is achieved through the contrastive learning approach by comparing representations of differently augmented input images. The learned representations are transferred to downstream tasks – the classification of respiratory diseases. We evaluate the proposed approach on two tasks for pneumonia classification, one for COVID-19 recognition and one for discrimination of different pneumonia types. The results show that our approach yields competitive results without requiring large amounts of labeled training data.

23 citations
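The contrastive pretraining described above pulls two augmentations of the same radiograph together in representation space while pushing other images apart. A minimal NT-Xent-style sketch of that per-pair loss in plain Python (the function names and toy low-dimensional embeddings are illustrative assumptions, not the authors' code):

```python
import math

def nt_xent_pair_loss(z_i, z_j, others, temperature=0.5):
    """Contrastive loss for one positive pair (a sketch of the idea).

    z_i, z_j: embeddings of two augmentations of the same X-ray.
    others:   embeddings of other images in the batch (negatives).
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = math.exp(cos(z_i, z_j) / temperature)
    neg = sum(math.exp(cos(z_i, z) / temperature) for z in others)
    # Loss is small when the positive pair is similar and the
    # negatives are dissimilar; minimizing it shapes the encoder.
    return -math.log(pos / (pos + neg))
```

The loss drops as the two augmented views agree: a pair of nearly identical embeddings scores lower than an orthogonal pair with the same negatives.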

Journal ArticleDOI
TL;DR: An integrated method for selecting the optimal deep learning model based on a novel crow swarm optimization algorithm for COVID-19 diagnosis using a designed fitness function for evaluating the performance of the deep learning models is proposed.
Abstract: Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which COVID-19 diagnostic model should be selected and which performance criteria decision-makers of healthcare organizations should consider. A selection scheme is therefore necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization algorithm. Crow swarm optimization is employed to find an optimal set of coefficients using a designed fitness function that evaluates the performance of the deep learning models, and is modified to obtain a good coefficient distribution by considering the best average fitness. Two datasets are utilized: the first includes 746 computed tomography images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unimproved computed tomography lung images of 632 positive COVID-19 cases. Fifteen trained and pretrained deep learning models with nine evaluation metrics are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 has an accuracy of 91.46% and an F1-score of 90.49%; with a closeness overall fitness value of 5715.988, it is selected as the optimal deep learning model for identifying COVID-19 in computed tomography lung images. In contrast, VGG16 is selected as the optimal model for the second dataset, with a closeness overall fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets.
The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.

17 citations
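The selection scheme above ranks candidate models by a coefficient-weighted fitness over multiple evaluation metrics. A hedged sketch of that ranking step (the metric names, weights, and helper functions below are illustrative assumptions; the paper's actual fitness function and crow-swarm-optimized coefficients are not reproduced here):

```python
def fitness(metrics, weights):
    """Weighted fitness of one candidate model.

    metrics: dict of metric name -> value in [0, 1]
    weights: dict of metric name -> coefficient (in the paper these
             would be found by the crow swarm optimizer).
    """
    return sum(weights[m] * metrics[m] for m in weights)

def select_best(models, weights):
    """Return the name of the candidate with the highest fitness."""
    return max(models, key=lambda name: fitness(models[name], weights))
```

With the reported first-dataset scores, ResNet50 would win any positively weighted combination of accuracy and F1-score against a weaker candidate.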

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an extension to the widespread FL process, namely flexible federated learning (FFL), for collaborative training on such data, and demonstrated that, with heterogeneously labeled datasets, FFL-based training leads to a significant performance increase compared to conventional FL training, where only the uniformly annotated images are utilized.
Abstract: Due to the rapid advancements in recent years, medical image analysis is largely dominated by deep learning (DL). However, building powerful and robust DL models requires training with large multi-party datasets. While multiple stakeholders have provided publicly available datasets, the ways in which these data are labeled vary widely. For instance, an institution might provide a dataset of chest radiographs containing labels denoting the presence of pneumonia, while another institution might focus on determining the presence of metastases in the lung. Training a single AI model utilizing all these data is not feasible with conventional federated learning (FL). This prompts us to propose an extension to the widespread FL process, namely flexible federated learning (FFL), for collaborative training on such data. Using 695,000 chest radiographs from five institutions across the globe - each with differing labels - we demonstrate that, with heterogeneously labeled datasets, FFL-based training leads to a significant performance increase compared to conventional FL training, where only the uniformly annotated images are utilized. We believe that our proposed algorithm could accelerate the process of bringing collaborative training methods from the research and simulation phase to real-world applications in healthcare.

6 citations
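The key point of the entry above is that institutions with different label sets can still train together. One simplified way to picture this (a sketch of the general idea, not the authors' exact FFL algorithm; the parameter-group names are hypothetical) is federated averaging where each parameter group is averaged only over the clients that actually trained it:

```python
def ffl_aggregate(client_updates):
    """Aggregate client updates per parameter group.

    client_updates: list of dicts mapping a parameter-group name
    (e.g. a shared 'backbone', or label-specific heads such as
    'head_pneumonia' / 'head_metastases') to a flat weight list.
    A group missing from a client simply does not contribute.
    """
    merged = {}
    for update in client_updates:
        for name, weights in update.items():
            merged.setdefault(name, []).append(weights)
    # Element-wise mean over only the contributing clients.
    return {
        name: [sum(col) / len(col) for col in zip(*contribs)]
        for name, contribs in merged.items()
    }
```

The shared backbone benefits from every institution's data, while each label-specific head is updated only by the institutions that have that label.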

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
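The central reformulation in the entry above - a block learns a residual function relative to its input, with an identity shortcut - can be sketched in a toy one-dimensional form (the helper below is illustrative, not an actual convolutional block):

```python
def residual_block(x, f):
    """Residual learning: output = f(x) + x.

    x: input feature vector (a plain list of floats here).
    f: the learned residual mapping; in a real ResNet this would be
       a small stack of convolutions, batch norm, and ReLU.
    """
    fx = f(x)
    # Identity shortcut: add the input back element-wise.
    return [a + b for a, b in zip(fx, x)]
```

The motivating property: if the optimal mapping for a layer is close to the identity, the block only has to drive its residual toward zero, which is easier than learning the identity from scratch - this is what lets depths of 100+ layers remain trainable.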

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
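The "very small (3x3) convolution filters" argument above rests on simple parameter arithmetic: a stack of 3x3 convolutions covers the same receptive field as one larger filter with fewer weights and more non-linearities. A sketch of that comparison (helper names are illustrative; biases are ignored):

```python
def conv_params(k, c_in, c_out):
    """Weight count of one k x k convolution layer (no biases)."""
    return k * k * c_in * c_out

def stacked_3x3_vs_single(k, c):
    """Compare n stacked 3x3 convolutions with the single k x k
    convolution that has the same receptive field, keeping the
    channel width at c throughout (k must be odd)."""
    n = (k - 1) // 2  # each extra 3x3 layer grows the field by 2
    return n * conv_params(3, c, c), conv_params(k, c, c)
```

For a 5x5 receptive field at 64 channels, two 3x3 layers need 73,728 weights versus 102,400 for a single 5x5 layer, while also inserting an extra rectification in between.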

Proceedings ArticleDOI
21 Jul 2017
TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.

27,821 citations
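Two quantities from the abstract above are easy to make concrete: the L(L+1)/2 direct connections of a dense block, and the channel count each layer sees when all preceding feature-maps are concatenated. A small sketch (function names and the growth-rate framing follow DenseNet's description, but the helpers themselves are illustrative):

```python
def dense_connections(num_layers):
    """Direct connections in a dense block of L layers: L(L+1)/2,
    versus the L connections of a plain feed-forward stack."""
    return num_layers * (num_layers + 1) // 2

def dense_block_widths(input_channels, growth_rate, num_layers):
    """Input channel count seen by each layer when every layer
    concatenates all preceding feature-maps and emits
    growth_rate new ones."""
    widths = []
    channels = input_channels
    for _ in range(num_layers):
        widths.append(channels)
        channels += growth_rate  # concatenate this layer's output
    return widths
```

Because each layer only has to produce a narrow growth_rate-channel output on top of reused features, the network stays parameter-efficient despite the quadratic connection count.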

Proceedings ArticleDOI
François Chollet1
21 Jul 2017
TL;DR: This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
Abstract: We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.

10,422 citations
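The efficiency claim in the Xception entry above comes from the factorization it builds on: a depthwise k x k convolution (one filter per input channel) followed by a pointwise 1x1 convolution replaces a full k x k convolution. A parameter-count sketch (helper names are illustrative; biases are ignored):

```python
def regular_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise separable convolution: a depthwise k x k stage
    (k*k weights per input channel) plus a pointwise 1 x 1 stage
    that mixes channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise
```

For a 3x3 layer mapping 64 to 128 channels, the separable form needs 8,768 weights against 73,728 for the regular convolution, which is why Xception can match Inception V3's parameter budget while spending it more efficiently.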


"Medical imaging and computational i..." refers methods in this paper

  • ...ResNet-101 and Xception show the best performance....


  • ...Some pre-trained CNN models including Xception [109], VGG16 and VGG-19 [110], ResNet-50 [111], DenseNet-121 and DenseNet-169 [112], and classifiers including Random Forest (RF), K-nearest neighbours (KNN), Naive Bayes and SVM have been evaluated in the study....
