scispace - formally typeset
Author

Okeke Stephen

Bio: Okeke Stephen is an academic researcher from Dongseo University. The author has contributed to research in topics: Deep learning & Convolutional neural network. The author has an h-index of 2, co-authored 6 publications receiving 180 citations.

Papers
Journal ArticleDOI
TL;DR: It is difficult to obtain a large pneumonia dataset for this classification task, so several data augmentation algorithms were deployed to improve the validation and classification accuracy of the CNN model, achieving remarkable validation accuracy.
Abstract: This study proposes a convolutional neural network model trained from scratch to classify and detect the presence of pneumonia from a collection of chest X-ray image samples. Unlike other methods that rely solely on transfer learning approaches or traditional handcrafted techniques to achieve a remarkable classification performance, we constructed a convolutional neural network model from scratch to extract features from a given chest X-ray image and classify it to determine if a person is infected with pneumonia. This model could help mitigate the reliability and interpretability challenges often faced when dealing with medical imagery. Unlike other deep learning classification tasks with sufficient image repositories, it is difficult to obtain a large pneumonia dataset for this classification task; therefore, we deployed several data augmentation algorithms to improve the validation and classification accuracy of the CNN model, achieving remarkable validation accuracy.

358 citations
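The augmentation step described in the abstract can be sketched with a few generic image transforms. The paper does not list its exact pipeline here, so the flips, rotations, and brightness jitter below are common stand-ins, applied to a dummy grayscale "X-ray" array:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of a 2-D grayscale image.

    Illustrative stand-ins for common augmentations (flip, rotation,
    brightness jitter); not the paper's exact algorithms.
    """
    out = image.copy()
    if rng.random() < 0.5:                      # random horizontal flip
        out = out[:, ::-1]
    k = int(rng.integers(0, 4))                 # rotate by 0/90/180/270 degrees
    out = np.rot90(out, k)
    # brightness jitter, clipped back to the valid [0, 1] intensity range
    out = np.clip(out + rng.uniform(-0.1, 0.1), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
xray = rng.random((64, 64))                     # dummy chest X-ray image
batch = np.stack([augment(xray, rng) for _ in range(8)])
print(batch.shape)                              # (8, 64, 64)
```

Each training image yields several slightly different copies, which is how augmentation enlarges a small medical dataset without collecting new samples.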

Proceedings ArticleDOI
01 Feb 2019
TL;DR: To validate the classification ability of the network model, the method was deployed to extract features from a given set of disjointed data with diverse convolutional chunks in a single network, obtaining remarkable classification results of 98% and 95% on the fashion and color sets, respectively.
Abstract: An improved multi-loss, multi-output convolutional neural network method was deployed to extract features from a given set of disjointed data (Fashion and Color) with diverse convolutional chunks in a single network. The first convolution block extracts features from the first image dataset (Fashion) and determines the classes to which they belong. The second block is responsible for learning the information encoded in the second set of data (Color), classifying it and appending the result to the features extracted from the first convolutional block. Each block possesses its own loss function, which makes the network a multi-loss convolutional neural network. Two fully connected output heads are generated at the network terminal, enabling the network to perform predictions on a combination of disjointed labels. To validate the classification ability of our network model, we conducted several experiments with different network parameters and variations of data sizes and obtained remarkable classification results of 98% and 95% on the fashion and color sets, respectively.

9 citations
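The multi-loss idea above, two output heads that each have their own loss summed for training, can be illustrated numerically. The shared feature vector, head weights, class counts, and labels below are all invented for illustration:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -np.log(probs[label])

# Hypothetical shared feature vector from the convolutional trunk.
features = np.array([0.2, -1.3, 0.7, 0.1])

# Two independent output heads with random weights (illustrative sizes):
rng = np.random.default_rng(1)
W_fashion = rng.normal(size=(4, 10))   # e.g. 10 fashion classes
W_color   = rng.normal(size=(4, 7))    # e.g. 7 color classes

p_fashion = softmax(features @ W_fashion)
p_color   = softmax(features @ W_color)

# Each head has its own loss; training minimises their sum, so gradients
# from both tasks flow back into the shared trunk.
loss = cross_entropy(p_fashion, 3) + cross_entropy(p_color, 5)
print(round(loss, 3))
```

The summed loss is what makes the network "multi-loss": one backward pass updates the shared features using supervision from both label sets at once.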

Proceedings ArticleDOI
06 Jul 2019
TL;DR: This work introduces a user-friendly image-to-speech tracking system with multiple language capabilities for easy navigation and valuable information extraction, especially for visually impaired people.
Abstract: We introduce a technique for real-time deep-learning-based image detection with multilingual neural text-to-speech (TTS) synthesis, generating different voices from a single model. In this work, we show an improvement over the existing single-lingual approach to single-model neural text-to-speech synthesis. This model, constructed with a high-performance neural network building block (the Inception4 model), demonstrated significant audio signal quality improvement on images detected in real life. We show that a single deep learning model with a single neural TTS system can generate multiple languages with unique voices and present them in a real-life environment. We adopted a transfer learning method for the image detection and recognition task and retrained the top layer of the model. This work introduces a user-friendly image-to-speech tracking system with multiple language capabilities for easy navigation and valuable information extraction, especially for visually impaired people.

3 citations

01 Sep 2019
TL;DR: A deep convolutional neural network approach to detect and track children-in-danger in real time using six classes which include bullying, crying, accident, sleeping, risk (handling sharp objects) and fighting scenarios.
Abstract: We propose a deep convolutional neural network approach to detect and track children in danger in real time. Our model consists of six classes, which include bullying, crying, accident, sleeping, risk (handling sharp objects) and fighting scenarios. We trained our model to extract features, which serve as a strong basis for detecting and recognising children's dangerous activities, with potential for deployment in tracking tasks. Our approach works by matching the extracted features of a detected child in an image frame and creating the corresponding bounding box across the detected class (danger). The developed framework is capable of detecting, classifying and tracking when a child is being bullied, at risk of harm from sharp objects, involved in an accident, fighting, etc. We made use of a total of 301 images and applied a transfer learning technique to them due to the insufficient dataset. The model displayed remarkable performance and is capable of performing detection and subsequent tracking of children at risk in real-life scenes.

2 citations
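The transfer-learning setup mentioned above, retraining only the top layer on top of a frozen base model, can be sketched as a plain softmax classifier trained by gradient descent on fixed feature vectors. The feature dimension, class count, learning rate, and data below are illustrative, not the paper's:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nll(probs, onehot):
    # mean negative log-likelihood of the true classes
    return -np.log((probs * onehot).sum(axis=1)).mean()

rng = np.random.default_rng(0)
n, d, c = 30, 16, 6                       # images, frozen-feature dim, classes
feats = rng.normal(size=(n, d))           # stand-in for frozen base-model features
onehot = np.eye(c)[rng.integers(0, c, size=n)]

W = np.zeros((d, c))                      # the only trainable parameters
loss_before = nll(softmax(feats @ W), onehot)
for _ in range(200):                      # plain full-batch gradient descent
    probs = softmax(feats @ W)
    W -= 0.1 * feats.T @ (probs - onehot) / n
loss_after = nll(softmax(feats @ W), onehot)

print(round(loss_before, 3), round(loss_after, 3))
```

Because only `W` is updated while the base features stay fixed, a very small dataset (here 30 samples, echoing the paper's 301 images) can still be fit without retraining the full network.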

Journal ArticleDOI
01 Oct 2022-Sensors
TL;DR: This work proposes an ensemble deep-learning architectural framework based on a deep learning model architectural voting policy to compute and learn the hierarchical and high-level features in industrial artefacts, to improve the recognition and classification of production faults in industrial products.
Abstract: Manual or traditional industrial product inspection and defect-recognition models have some limitations, including process complexity, time consumption, error-proneness, and expense. These issues negatively impact the quality control processes. Therefore, an efficient, rapid, and intelligent model is required to improve the recognition and classification of production faults in industrial products for optimal visual inspection and quality control. However, intelligent models that trade high accuracy for high latency are tedious for real-time implementation and inferencing. This work proposes an ensemble deep-learning architectural framework based on a deep learning model architectural voting policy to compute and learn the hierarchical and high-level features in industrial artefacts. The voting policy is formulated with respect to three crucial model characteristics: optimality, efficiency, and performance accuracy. In the study, three publicly available industrial produce datasets were used for the proposed model's various experiments and validation process, with remarkable results recorded, demonstrating a significant increase in fault recognition and classification performance in industrial products.

1 citation
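A minimal sketch of a voting policy, assuming simple majority voting over per-model class predictions; the paper's policy additionally weighs model optimality and efficiency, which is omitted here:

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions from several models by majority vote.

    `predictions`: int array of shape (n_models, n_samples) of class indices.
    A simplified stand-in for the paper's architectural voting policy.
    """
    n_classes = int(predictions.max()) + 1
    # count votes per class for each sample, then pick the most-voted class
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Three hypothetical models classifying five industrial parts (0 = good, 1 = defect).
preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0]])
print(majority_vote(preds))   # [0 1 1 0 1]
```

Each sample's final label is whatever most models agree on, which damps the individual errors of any single model.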


Cited by
Journal ArticleDOI
TL;DR: With the proposed approach in this study, it is evident that the model can efficiently contribute to the detection of COVID-19 disease.

427 citations

Journal ArticleDOI
01 Apr 2020-Symmetry
TL;DR: The main idea is to collect all the possible images for COVID-19 that exist up to the writing of this research and use the GAN network to generate more images, to help in the detection of this virus from the available X-ray images with the highest accuracy possible.
Abstract: The coronavirus (COVID-19) pandemic is putting healthcare systems across the world under unprecedented and increasing pressure according to the World Health Organization (WHO). With the advances in computer algorithms and especially Artificial Intelligence, the detection of this type of virus in the early stages will help in fast recovery and help in releasing the pressure off healthcare systems. In this paper, a GAN with deep transfer learning for coronavirus detection in chest X-ray images is presented. The lack of datasets for COVID-19, especially of chest X-ray images, is the main motivation of this scientific study. The main idea is to collect all the possible images for COVID-19 that exist up to the writing of this research and use the GAN network to generate more images, to help in the detection of this virus from the available X-ray images with the highest accuracy possible. The dataset used in this research was collected from different sources and is available for researchers to download and use. The collected dataset contains 307 images across four classes: COVID-19, normal, pneumonia bacterial, and pneumonia virus. Three deep transfer models are selected in this research for investigation: AlexNet, GoogLeNet, and ResNet18. These models were selected because they contain a small number of layers, which reduces the complexity, memory consumption, and execution time of the proposed model. Three case scenarios are tested in the paper: the first scenario includes four classes from the dataset, the second includes three classes, and the third includes two classes. All the scenarios include the COVID-19 class, as detecting it is the main target of this research.
In the first scenario, GoogLeNet is selected as the main deep transfer model, as it achieves 80.6% testing accuracy. In the second scenario, AlexNet is selected as the main deep transfer model, as it achieves 85.2% testing accuracy, while in the third scenario, which includes two classes (COVID-19 and normal), GoogLeNet is selected as the main deep transfer model, as it achieves 100% testing accuracy and 99.9% validation accuracy. All the performance measures strengthen the results obtained through the research.

391 citations

Journal ArticleDOI
TL;DR: A novel artificial neural network, Convolutional CapsNet, is proposed for the detection of COVID-19 disease using chest X-ray images with capsule networks, providing fast and accurate diagnostics for COVID-19 with binary classification and multi-class classification.
Abstract: Coronavirus is an epidemic that spreads very quickly. For this reason, it has very devastating effects in many areas worldwide. It is vital to detect COVID-19 disease as quickly as possible to restrain the spread of the disease. The similarity of COVID-19 disease to other lung infections makes the diagnosis difficult. In addition, the high spreading rate of COVID-19 increased the need for a fast system for the diagnosis of cases. For this purpose, interest in various computer-aided deep learning models (such as CNN, DNN, etc.) has increased. In these models, mostly radiology images are applied to determine the positive cases. Recent studies show that radiological images contain important information for the detection of coronavirus. In this study, a novel artificial neural network, Convolutional CapsNet, is proposed for the detection of COVID-19 disease using chest X-ray images with capsule networks. The proposed approach is designed to provide fast and accurate diagnostics for COVID-19 with binary classification (COVID-19 and No-Findings) and multi-class classification (COVID-19, No-Findings, and Pneumonia). The proposed method achieved accuracies of 97.24% and 84.22% for binary-class and multi-class classification, respectively. It is thought that the proposed method may help physicians to diagnose COVID-19 disease and increase the diagnostic performance. In addition, we believe that the proposed method may be an alternative way to diagnose COVID-19 by providing fast screening.

244 citations

Journal ArticleDOI
TL;DR: A novel attention-based deep learning model using the attention module with VGG-16 that captures the spatial relationship between the ROIs in CXR images and indicates that it is suitable for CXR image classification in COVID-19 diagnosis.
Abstract: Computer-aided diagnosis (CAD) methods such as Chest X-rays (CXR)-based method is one of the cheapest alternative options to diagnose the early stage of COVID-19 disease compared to other alternatives such as Polymerase Chain Reaction (PCR), Computed Tomography (CT) scan, and so on. To this end, there have been few works proposed to diagnose COVID-19 by using CXR-based methods. However, they have limited performance as they ignore the spatial relationships between the region of interests (ROIs) in CXR images, which could identify the likely regions of COVID-19’s effect in the human lungs. In this paper, we propose a novel attention-based deep learning model using the attention module with VGG-16. By using the attention module, we capture the spatial relationship between the ROIs in CXR images. In the meantime, by using an appropriate convolution layer (4th pooling layer) of the VGG-16 model in addition to the attention module, we design a novel deep learning model to perform fine-tuning in the classification process. To evaluate the performance of our method, we conduct extensive experiments by using three COVID-19 CXR image datasets. The experiment and analysis demonstrate the stable and promising performance of our proposed method compared to the state-of-the-art methods. The promising classification performance of our proposed method indicates that it is suitable for CXR image classification in COVID-19 diagnosis.

161 citations
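One common way to realise the attention idea described above is to score each spatial location of a convolutional feature map, normalise the scores with a softmax, and pool the map with those weights. The sketch below is a generic spatial-attention computation, not the paper's exact module; the feature-map size matches VGG-16's 4th pooling output for a 224x224 input, and the map itself is random:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in for VGG-16's 4th pooling output: a 14x14 grid of 512-dim features.
rng = np.random.default_rng(0)
fmap = rng.random((14, 14, 512))

# Score each spatial location (here: mean activation, an illustrative choice),
# softmax the scores into an attention map, then attention-weighted pooling.
scores = fmap.mean(axis=-1)                            # (14, 14) location scores
attn = softmax(scores.ravel()).reshape(14, 14)         # sums to 1 over locations
attended = (fmap * attn[..., None]).sum(axis=(0, 1))   # (512,) image descriptor

print(attended.shape, round(float(attn.sum()), 6))     # (512,) 1.0
```

The attention map makes some grid locations count more than others in the pooled descriptor, which is how such a module emphasises likely regions of interest in the lungs; in the real model the location scores are learned rather than a fixed mean.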

Journal ArticleDOI
19 Jun 2020
TL;DR: A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from the state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way and is able to outperform all the individual models.
Abstract: Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, it is a challenging task to examine chest X-rays. There is a need to improve the diagnosis accuracy. In this work, an efficient model for the detection of pneumonia trained on digital chest X-ray images is proposed, which could aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions from the state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This approach is a supervised learning approach in which the network predicts the result based on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier is able to outperform all the individual models. Finally, the model is evaluated, not only in terms of test accuracy, but also in the AUC score. The final proposed weighted classifier model is able to achieve a test accuracy of 98.43% and an AUC score of 99.76 on the unseen data from the Guangzhou Women and Children's Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid radiologists in the diagnosis process.

155 citations
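The weighted-classifier idea above, combining per-model softmax outputs with per-model weights, reduces to a weighted average of probability vectors. The probabilities, weights, and model attributions below are invented for illustration; the paper derives its weights in an optimal way rather than fixing them by hand:

```python
import numpy as np

# Per-model softmax outputs for one X-ray over (normal, pneumonia);
# values and model labels are hypothetical.
probs = np.array([[0.30, 0.70],   # e.g. ResNet18
                  [0.20, 0.80],   # e.g. Xception
                  [0.40, 0.60]])  # e.g. InceptionV3

weights = np.array([0.5, 0.3, 0.2])   # illustrative per-model weights
weights = weights / weights.sum()     # keep the weights normalised

combined = weights @ probs            # weighted average of the predictions
print(combined, combined.argmax())
```

Because the output is still a probability vector, the ensemble can be evaluated exactly like a single model (accuracy, AUC), while better models contribute more to the final decision through their larger weights.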