Journal ArticleDOI

Determination of COVID-19 Pneumonia based on Generalized Convolutional Neural Network Model from Chest X-Ray Images

TL;DR: A transfer learning-based CNN model was developed using a total of 1,218 chest X-ray images (CXIs), consisting of 368 COVID-19 pneumonia and 850 other pneumonia cases, with pre-trained architectures including DenseNet-201, ResNet-18, and SqueezeNet.
Abstract: X-ray units have become one of the most advantageous candidates for triaging patients infected with the new Coronavirus disease COVID-19, thanks to their relatively low radiation dose, ease of access, practicality, low cost, and quick imaging process. This research intended to develop a reliable convolutional neural network (CNN) model for the classification of COVID-19 from chest X-ray views, while also preventing bias issues arising from the database. A transfer learning-based CNN model was developed using a total of 1,218 chest X-ray images (CXIs), consisting of 368 COVID-19 pneumonia and 850 other pneumonia cases, with pre-trained architectures including DenseNet-201, ResNet-18, and SqueezeNet. The chest X-ray images were acquired from publicly available databases, and each individual image was carefully selected to prevent any bias problem. A stratified 5-fold cross-validation approach was utilized, with 90% of the data used for training and 10% for testing (unseen folds); 20% of the training data was used as a validation set to prevent overfitting. The binary classification performances of the proposed CNN models were evaluated on the testing data. An activation mapping approach was implemented to improve the causality and visual interpretability of the radiographs. The outcomes demonstrated that the proposed CNN model built on the DenseNet-201 architecture outperformed the others, with the highest accuracy, precision, recall, and F1-score of 94.96%, 89.74%, 94.59%, and 92.11%, respectively. The results indicated that reliable diagnosis of COVID-19 pneumonia from CXIs based on a CNN model opens the door to accelerating triage, saving critical time, and prioritizing resources, besides assisting radiologists.
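The stratification in the paper's evaluation protocol keeps the COVID-19/other-pneumonia ratio consistent across folds, which matters with imbalanced classes (368 vs. 850). A minimal stand-alone sketch of stratified fold assignment, using the paper's class counts but synthetic labels (the function name and round-robin strategy are illustrative, not the authors' implementation):

```python
import random

def stratified_folds(labels, k=5, seed=42):
    """Split indices into k folds, preserving class proportions in each fold."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Round-robin assignment distributes each class evenly over the folds.
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

# Stand-in for the paper's class balance: 368 COVID-19 vs. 850 other pneumonia.
labels = [1] * 368 + [0] * 850
folds = stratified_folds(labels)
```

Each fold then ends up with roughly 30% positives, matching the overall 368/1218 ratio, so no test fold is accidentally dominated by one class.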
Citations
Journal ArticleDOI
TL;DR: In this paper, a sensitive and fast sandwich-type electrochemical SARS-CoV-2 (COVID-19) nucleocapsid protein immunosensor was prepared based on bismuth tungstate/bismuth sulfide composite (Bi2WO6/Bi2S3) as electrode platform.
Abstract: A sensitive and fast sandwich-type electrochemical SARS-CoV‑2 (COVID-19) nucleocapsid protein immunosensor was prepared based on bismuth tungstate/bismuth sulfide composite (Bi2WO6/Bi2S3) as the electrode platform and graphitic carbon nitride sheet decorated with gold nanoparticles (Au NPs) and tungsten trioxide sphere composite (g-C3N4/Au/WO3) for signal amplification. Electrostatic interactions between the capture antibody and Bi2WO6/Bi2S3 led to immobilization of the capture nucleocapsid antibody. The detection antibody was then conjugated to g-C3N4/Au/WO3 via amino-gold affinity. After physicochemical characterization via transmission electron microscopy (TEM), scanning electron microscopy (SEM), X-ray diffraction (XRD), and X-ray photoelectron spectroscopy (XPS), cyclic voltammetry (CV), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS) analyses were implemented to evaluate the electrochemical performance of the prepared immunosensor. The detection of SARS-CoV-2 nucleocapsid protein (SARS-CoV-2 NP) in a small saliva sample (100.0 µL) took just 30 min and yielded a detection limit (LOD) of 3.00 fg mL−1, making it an effective tool for point-of-care COVID-19 testing.

32 citations

Journal ArticleDOI
TL;DR: In this article, a voltammetric nanosensor was developed for trace-level monitoring of favipiravir based on gold/silver core-shell nanoparticles (Au@Ag CSNPs) with the conductive polymer poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) and functionalized multi-walled carbon nanotubes (F-MWCNTs) on a glassy carbon electrode (GCE).
Abstract: A novel and sensitive voltammetric nanosensor was developed for the first time for trace-level monitoring of favipiravir (FAV), based on gold/silver core–shell nanoparticles (Au@Ag CSNPs) with the conductive polymer poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) and functionalized multi-walled carbon nanotubes (F-MWCNTs) on a glassy carbon electrode (GCE). The formation of the Au@Ag CSNPs/PEDOT:PSS/F-MWCNT composite was confirmed by various analytical techniques, including X-ray diffraction (XRD), ultraviolet–visible spectroscopy (UV–Vis), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDX), and field-emission scanning electron microscopy (SEM). Under the optimized conditions and at a typical working potential of +1.23 V (vs. Ag/AgCl), the Au@Ag CSNPs/PEDOT:PSS/F-MWCNT/GCE revealed linear quantitative ranges from 0.005 to 0.009 and 0.009 to 1.95 µM with a limit of detection of 0.46 nM (S/N = 3) and acceptable relative standard deviations (1.12–4.93%) for pharmaceutical formulations, urine, and human plasma samples without applying any sample pretreatment. The interference effect of antiviral drugs, biological compounds, and amino acids was negligible, and the sensing system demonstrated outstanding reproducibility, repeatability, stability, and reusability. The findings revealed that this assay strategy has promising applications in detecting FAV in clinical samples, which could be attributed to the large surface area of active sites and the high conductivity of the bimetallic nanocomposite.

28 citations

Journal ArticleDOI
TL;DR: A new approach based on an evidence-based fusion theory for the fusion of five pre-trained convolutional neural networks, allowing the combination of a set of deep learning classifiers to provide more accurate disease detection results.

19 citations

Journal ArticleDOI
31 Jul 2021
TL;DR: In this article, meta-learning was used to determine, a priori, which classifier would be ideal for a specific dataset, and the results obtained show numerically and statistically that there are reliable classifiers to suggest medical diagnoses.
Abstract: Machine learning in the medical area has become a very important requirement. The healthcare professional needs useful tools to diagnose medical illnesses, and classifiers are important to provide such tools. However, questions arise: which classifier should be used? What metrics are appropriate to measure the performance of the classifier? How can a good distribution of the data be determined so that the classifier does not bias the medical patterns to be classified toward a particular class? Then the most important question: does a classifier perform well for a particular disease? This paper presents some answers to these questions, making use of classification algorithms widely used in machine learning research on datasets relating to medical illnesses under the supervised learning scheme. In addition to state-of-the-art algorithms in pattern classification, we introduce a novelty: the use of meta-learning to determine, a priori, which classifier would be ideal for a specific dataset. The results obtained show numerically and statistically that there are reliable classifiers to suggest medical diagnoses. In addition, we provide some insights about the expected performance of classifiers for such a task.
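The baseline question "which classifier to use?" can be sketched as a simple model-selection loop: fit each candidate on a training split, score it on a held-out split, and keep the best. The two toy classifiers and the 1-D dataset below are hypothetical stand-ins; the paper's actual meta-learning approach predicts the best classifier from dataset meta-features rather than training every candidate.

```python
import random

def majority_classifier(train):
    """Predicts the most common training label, ignoring features."""
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

def one_nn_classifier(train):
    """Predicts the label of the nearest training point (1-NN, 1-D feature)."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

def select_classifier(builders, train, valid):
    """Fit each candidate on `train`, score on `valid`, return the best."""
    scores = {name: build(train) and accuracy(build(train), valid)
              for name, build in builders.items()}
    return max(scores, key=scores.get), scores

# Hypothetical 1-D dataset: label is 1 iff the feature exceeds 0.5.
rng = random.Random(0)
data = [(x, int(x > 0.5)) for x in (rng.random() for _ in range(200))]
train, valid = data[:150], data[150:]
best, scores = select_classifier(
    {"majority": majority_classifier, "1nn": one_nn_classifier}, train, valid)
```

Meta-learning amortizes this loop: instead of paying the cost of training every candidate on every new dataset, it learns a mapping from dataset characteristics to the expected winner.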

13 citations

Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper used dropout in the convolutional part of the network to detect pneumonia in chest X-ray images from retrospective cohorts of pediatric patients from Guangzhou Women and Children's Medical Center, Guangzhou, China.

12 citations

References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won first place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
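The core idea of residual learning is that a block computes y = F(x) + x, so the layers only have to learn the residual F. A minimal NumPy sketch (the two-matmul F and the function names are illustrative, not the paper's convolutional blocks) shows the key property: with F initialized to zero, the block is exactly the identity, which is why stacking many such blocks does not degrade the signal path.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, w1, w2):
    """y = F(x) + x: the block learns a residual F, not a full mapping."""
    return relu(x @ w1) @ w2 + x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# Zero weights make F(x) = 0, so the block reduces to the identity mapping.
# A plain (non-residual) block with zero weights would instead output zeros,
# destroying the signal; this asymmetry is what makes deep residual nets
# easier to optimize.
w_zero = np.zeros((8, 8))
y = residual_block(x, w_zero, w_zero)
```

The shortcut adds no parameters, matching the paper's observation that residual nets gain depth without extra complexity on that path.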

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
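The paper's argument for very small filters rests on simple receptive-field and parameter arithmetic: n stacked 3×3 convolutions (stride 1) cover the same region as one larger filter, with fewer weights. A short sketch of that arithmetic (the channel count 64 is an illustrative assumption, and biases are ignored):

```python
def stacked_receptive_field(k, n):
    """Receptive field of n stacked k x k convolutions with stride 1."""
    return n * (k - 1) + 1

def conv_params(k, channels):
    """Weights in one k x k conv with `channels` in and out (no bias term)."""
    return k * k * channels * channels

# Three stacked 3x3 layers see the same 7x7 region as a single 7x7 layer...
rf = stacked_receptive_field(3, 3)

# ...but with roughly 45% fewer weights, plus two extra non-linearities.
small = 3 * conv_params(3, 64)   # 3 * 9  * 64 * 64 = 110,592
big = conv_params(7, 64)         #     49 * 64 * 64 = 200,704
```

This is why pushing depth to 16-19 layers with 3×3 filters stays tractable: depth buys discriminative power while the small filters keep the parameter count in check.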

55,235 citations

Proceedings ArticleDOI
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei
20 Jun 2009
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Abstract: The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.

49,639 citations

Journal ArticleDOI
TL;DR: The epidemiological, clinical, laboratory, and radiological characteristics and treatment and clinical outcomes of patients with laboratory-confirmed 2019-nCoV infection in Wuhan, China, were reported.

36,578 citations

Proceedings ArticleDOI
21 Jul 2017
TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
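The connectivity counts in the abstract follow directly from each layer taking all earlier feature-maps as input, and they can be checked with a few lines. The growth-rate bookkeeping below (k0 = 64, growth k = 32) uses illustrative values, not the paper's exact configurations:

```python
def dense_connections(L):
    """Direct connections in a DenseNet block of L layers: each layer
    receives the feature-maps of all preceding layers, giving L(L+1)/2."""
    return L * (L + 1) // 2

def traditional_connections(L):
    """A plain chain of L layers has L connections, one per layer."""
    return L

def input_channels(layer, k0=64, growth=32):
    """Channels entering a given layer when every earlier layer contributes
    `growth` (the growth rate k) feature-maps by concatenation."""
    return k0 + (layer - 1) * growth
```

Because each layer only needs to add a small number of new feature-maps (the growth rate) on top of the reused ones, the total parameter count stays low even though the number of connections grows quadratically.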

27,821 citations