
Showing papers in "Medical & Biological Engineering & Computing in 2021"


Journal ArticleDOI
TL;DR: In this paper, an encryption algorithm based on integer wavelet transform (IWT) blended with deoxyribonucleic acid (DNA) and chaos was proposed to secure digital medical images.
Abstract: In this growing era, a massive amount of digital electronic health records (EHRs) is transferred through the open network. EHRs are at risk of a myriad of security threats; to overcome such threats, encryption is a reliable technique to secure data. This paper presents an encryption algorithm based on integer wavelet transform (IWT) blended with deoxyribonucleic acid (DNA) and chaos to secure digital medical images. The proposed work comprises two phases, i.e. a two-stage shuffling phase and a two-stage diffusion phase. The first stage of shuffling starts with initial block confusion, followed by row and column shuffling of pixels as the second stage. The pixels of the shuffled image are circularly shifted bitwise in the first stage of diffusion to enhance the security of the system against differential attack. The second stage of diffusion is based on DNA coding and DNA XOR operations. The experimental analyses have been carried out with 100 DICOM test images of 16-bit depth to evaluate the strength of the algorithm against statistical and differential attacks. The results show an average maximum entropy of 15.79, an NPCR of 99.99, a UACI of 33.31, and a large key space of 10^140, which indicate that the technique outperforms various other state-of-the-art techniques.
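The DNA-coding stage of the diffusion phase described above can be sketched as follows. This is a minimal illustration, assuming one common 2-bit-to-base encoding rule and a base-wise XOR derived from it; the paper's actual coding rules and key stream are not given here.

```python
# Hedged sketch of circular bit-shifting plus DNA-XOR diffusion.
# The 00->A, 01->C, 10->G, 11->T mapping is one common DNA encoding
# rule, used here purely as an illustrative assumption.
ENC = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DEC = {v: k for k, v in ENC.items()}

def dna_encode(byte):
    """Encode one 8-bit pixel as four DNA bases (MSB first)."""
    return [ENC[(byte >> s) & 0b11] for s in (6, 4, 2, 0)]

def dna_xor(seq_a, seq_b):
    """Base-wise DNA XOR: XOR the underlying 2-bit codes."""
    return [ENC[DEC[a] ^ DEC[b]] for a, b in zip(seq_a, seq_b)]

def dna_decode(seq):
    """Recover the 8-bit value from four bases."""
    val = 0
    for base in seq:
        val = (val << 2) | DEC[base]
    return val

def circular_shift(byte, k, width=8):
    """Circular bitwise left shift, as in the first diffusion stage."""
    k %= width
    return ((byte << k) | (byte >> (width - k))) & ((1 << width) - 1)

pixel, key = 0b10110010, 0b01011100
shifted = circular_shift(pixel, 3)
cipher = dna_decode(dna_xor(dna_encode(shifted), dna_encode(key)))
# DNA XOR is self-inverse, so XOR-ing with the key again restores the value
restored = dna_decode(dna_xor(dna_encode(cipher), dna_encode(key)))
assert restored == shifted
```

Because the XOR table is its own inverse, decryption reuses the same operations in reverse order.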

53 citations


Journal ArticleDOI
TL;DR: In this article, a transfer learning-based COVID-19 screening technique is proposed, which uses a truncated VGG16 (Visual Geometry Group from Oxford) architecture to extract features from CT scan images and principal component analysis (PCA) for feature selection.
Abstract: The newly discovered coronavirus disease, popularly known as COVID-19, is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and was declared a pandemic by the World Health Organization (WHO). Early-stage detection of COVID-19 is crucial for the containment of the pandemic it has caused. In this study, a transfer learning-based COVID-19 screening technique is proposed. The motivation of this study is to design an automated system that can assist medical staff, especially in areas where trained staff are outnumbered. The study investigates the potential of transfer learning-based models for automatically diagnosing diseases like COVID-19 to assist the medical force, especially in times of an outbreak. In the proposed work, a deep learning model, i.e., a truncated VGG16 (Visual Geometry Group from Oxford) network, is implemented to screen COVID-19 CT scans. The VGG16 architecture is fine-tuned and used to extract features from CT scan images. Further, principal component analysis (PCA) is used for feature selection. For the final classification, four different classifiers, namely deep convolutional neural network (DCNN), extreme learning machine (ELM), online sequential ELM, and bagging ensemble with support vector machine (SVM), are compared. The best-performing classifier, the bagging ensemble with SVM, achieved an accuracy of 95.7%, a precision of 95.8%, an area under the curve (AUC) of 0.958, and an F1 score of 95.3% on 208 test images within 385 ms. The results obtained on diverse datasets prove the superiority and robustness of the proposed work. A pre-processing technique has also been proposed for radiological data. The study further compares pre-trained CNN architectures and classification models against the proposed technique.
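The PCA feature-selection step above can be sketched as follows: high-dimensional features (standing in for the truncated-VGG16 output) are projected onto their top principal components before classification. The toy feature vectors and the nearest-centroid classifier used in place of the bagged SVM are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of PCA-based feature reduction via SVD. The "CNN
# features" are random vectors and the final classifier is a simple
# nearest-centroid rule, both stand-ins for the paper's pipeline.
rng = np.random.default_rng(0)
# Pretend VGG16-style features: 40 samples x 512 dims, two separable classes
X = np.vstack([rng.normal(0, 1, (20, 512)), rng.normal(3, 1, (20, 512))])
y = np.array([0] * 20 + [1] * 20)

def pca_fit_transform(X, n_components):
    """Project X onto its top principal components via SVD."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:n_components].T, mu, Vt[:n_components]

Z, mu, components = pca_fit_transform(X, n_components=8)

# Nearest-centroid stand-in for the final classifier
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()  # well-separated toy classes -> high accuracy
```

Reducing 512 dimensions to a handful of components is what makes the fast (sub-second) downstream classification reported above plausible.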

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the potential of NSCLC histology classification into Adenocarcinoma (AC) and SCC by applying different feature extraction and classification techniques on pre-treatment CT images.
Abstract: Adenocarcinoma (AC) and squamous cell carcinoma (SCC) are frequently reported subtypes of non-small cell lung cancer (NSCLC), responsible for a large fraction of cancer deaths worldwide. In this study, we aim to investigate the potential of NSCLC histology classification into AC and SCC by applying different feature extraction and classification techniques on pre-treatment CT images. The employed image dataset (102 patients) was taken from the publicly available Cancer Imaging Archive (TCIA) collection. We investigated four different families of techniques: (a) radiomics with two classifiers (kNN and SVM), (b) four state-of-the-art convolutional neural networks (CNNs) with transfer learning and fine tuning (Alexnet, ResNet101, Inceptionv3 and InceptionResnetv2), (c) a CNN combined with a long short-term memory (LSTM) network to fuse information about the spatial coherency of the tumor's CT slices, and (d) combinatorial models (LSTM + CNN + radiomics). In addition, the CT images were independently evaluated by two expert radiologists. Our results showed that the best CNN was Inception (accuracy = 0.67, AUC = 0.74). LSTM + Inception outperformed all other methods (accuracy = 0.74, AUC = 0.78). Moreover, LSTM + Inception outperformed experts by 7-25% (p < 0.05). The proposed methodology does not require detailed segmentation of the tumor region and it may be used in conjunction with radiological findings to improve clinical decision-making. Lung cancer histology classification from CT images based on CNN + LSTM.

49 citations


Journal ArticleDOI
TL;DR: In this article, a few-shot learning (FSL) using a generative adversarial network (GAN) was proposed to improve the applicability of DL in the optical coherence tomography diagnosis of rare diseases.
Abstract: Deep learning (DL) has been successfully applied to the diagnosis of ophthalmic diseases. However, rare diseases are commonly neglected due to insufficient data. Here, we demonstrate that few-shot learning (FSL) using a generative adversarial network (GAN) can improve the applicability of DL in the optical coherence tomography (OCT) diagnosis of rare diseases. Four major classes with a large number of datasets and five rare disease classes with a few-shot dataset are included in this study. Before training the classifier, we constructed GAN models to generate pathological OCT images of each rare disease from normal OCT images. The Inception-v3 architecture was trained using an augmented training dataset, and the final model was validated using an independent test dataset. The synthetic images helped in the extraction of the characteristic features of each rare disease. The proposed DL model demonstrated a significant improvement in the accuracy of the OCT diagnosis of rare retinal diseases and outperformed the traditional DL models, Siamese network, and prototypical network. By increasing the accuracy of diagnosing rare retinal diseases through FSL, clinicians can avoid neglecting rare diseases with DL assistance, thereby reducing diagnosis delay and patient burden.

45 citations


Journal ArticleDOI
TL;DR: The applicability of the proposed approach in clinical settings to support medical decisions regarding brain tumor detection is demonstrated, with promising results and a high level of accuracy, precision, and specificity.
Abstract: Brain cancer is a disease caused by the growth of abnormal, aggressive cells in the brain. Diagnosis of brain cancer is becoming more accurate day by day in parallel with technological developments. In this study, a deep learning model called BrainMRNet, developed for mass detection in open-source brain magnetic resonance images, was used. The BrainMRNet model includes three processing steps: attention modules, the hypercolumn technique, and residual blocks. To demonstrate the accuracy of the proposed model, three types of tumor leading to brain cancer were examined in this study: glioma, meningioma, and pituitary. In addition, a segmentation method was proposed, which additionally determines in which lobe area of the brain the two classes of tumors that cause brain cancer are more concentrated. The classification accuracy was 98.18% for glioma, 96.73% for meningioma, and 98.18% for pituitary tumors. At the end of the experiment, using the subset of glioma and meningioma tumor images, it was determined in which brain lobe the tumor region appeared, and 100% success was achieved in this determination. In this study, a hybrid deep learning model is presented for brain tumor detection. In addition, open-source software was proposed, which statistically determined in which lobe region of the human brain the tumor occurred. The methods applied and tested in the experiments have shown promising results with a high level of accuracy, precision, and specificity. These results demonstrate the applicability of the proposed approach in clinical settings to support medical decisions regarding brain tumor detection.

41 citations


Journal ArticleDOI
TL;DR: In this paper, a convolutional neural network (CNN) and an improved CNN (iDCNN) were proposed for early diagnosis of Wilson's disease (WD) using 3D optimized classification.
Abstract: Wilson's disease (WD) is caused by copper accumulation in the brain and liver, and if not treated early, can lead to severe disability and death. WD shows white matter hyperintensity (WMH) in brain magnetic resonance imaging (MRI) scans, but the diagnosis is challenging due to (i) subtle intensity changes and (ii) limited training MRI data when using artificial intelligence (AI). The objective is to design and validate seven types of high-performing AI-based computer-aided diagnosis (CADx) systems consisting of 3D-optimized classification and characterization of WD against controls. We propose a "conventional deep convolution neural network" (cDCNN) and an "improved DCNN" (iDCNN) in which the rectified linear unit (ReLU) activation function was modified to ensure it is "differentiable at zero." Three-dimensional optimization was achieved by recording accuracy while changing the CNN layers and augmentation by several folds. WD was characterized using (i) CNN-based feature map strength and (ii) bispectrum strengths of pixels having higher probabilities of WD. We further computed the (a) area under the curve (AUC), (b) diagnostic odds ratio (DOR), (c) reliability, (d) stability, and (e) benchmarking. Optimal results were achieved using 9 layers of CNN, with 4-fold augmentation. iDCNN yields superior performance compared to cDCNN, with accuracy and AUC of 98.28 ± 1.55% and 0.99 (p < 0.0001) versus 97.19 ± 2.53% and 0.984 (p < 0.0001), respectively. The DOR of iDCNN outperformed cDCNN fourfold. iDCNN also outperformed (a) the transfer learning-based "Inception V3" paradigm by 11.92% and (b) four types of "conventional machine learning-based systems": k-NN, decision tree, support vector machine, and random forest by 55.13%, 28.36%, 15.35%, and 14.11%, respectively. The AI-based systems can potentially be useful in early WD diagnosis. Graphical Abstract.
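The abstract says the iDCNN modifies ReLU to be "differentiable at zero" but does not give the exact form. Softplus is one standard smooth ReLU surrogate and is sketched below purely as an illustrative assumption, not as the paper's actual activation.

```python
import numpy as np

# Comparing ReLU with a smooth surrogate (softplus). The beta parameter
# and the choice of softplus itself are assumptions for illustration only.
def relu(x):
    return np.maximum(0.0, x)

def softplus(x, beta=10.0):
    """Smooth ReLU surrogate; larger beta brings it closer to ReLU."""
    return np.log1p(np.exp(beta * np.asarray(x, dtype=float))) / beta

x = np.linspace(-1.0, 1.0, 201)
max_gap = np.abs(softplus(x) - relu(x)).max()   # softplus tracks ReLU closely

# Unlike ReLU, the surrogate has a well-defined derivative at zero (about 0.5)
h = 1e-6
grad_at_zero = (softplus(h) - softplus(-h)) / (2 * h)
```

A smooth activation gives the optimizer a usable gradient at exactly zero input, which is the property the abstract highlights.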

31 citations


Journal ArticleDOI
TL;DR: Wang et al. explored two potential training strategies to address the overfitting problem in atrial fibrillation (AF) detection in electrocardiogram (ECG) signals.
Abstract: Nowadays, deep learning-based models have been widely developed for atrial fibrillation (AF) detection in electrocardiogram (ECG) signals. However, owing to the inevitable overfitting problem, the classification accuracy of the developed models differs severely when they are applied to independent test datasets. This situation is more significant for AF detection from dynamic ECGs. In this study, we explored two potential training strategies to address the overfitting problem in AF detection. The first is to use a Fast Fourier transform (FFT) and Hanning-window-based filter to suppress the influence of individual differences. The second is to train the model on wearable ECG data to improve the robustness of the model. Wearable ECG data from 29 patients with arrhythmia were collected for at least 24 h. To verify the effectiveness of the training strategies, a Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN)-based model was proposed and tested. We tested the model on the independent wearable ECG data set, as well as the MIT-BIH Atrial Fibrillation database and the PhysioNet/Computing in Cardiology Challenge 2017 database. The model achieved 96.23%, 95.44%, and 95.28% accuracy rates on the three databases, respectively. When comparing accuracy across training sets, the accuracy of the model trained with the proposed strategies dropped by only 2%, while that of the model trained without them decreased by approximately 15%. Therefore, the proposed training strategies serve as effective mechanisms for devising a robust AF detector and significantly enhanced the detection accuracy rates of the resulting deep networks.
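The first training strategy, FFT- and Hanning-window-based filtering, can be sketched roughly as follows. The sampling rate, passband edges, and the placement of the Hann taper are illustrative assumptions, not the paper's actual filter design.

```python
import numpy as np

# Sketch of FFT-domain filtering with a Hanning-shaped passband, in the
# spirit of the first training strategy above. All numeric choices here
# (250 Hz sampling, 1-30 Hz passband) are assumptions.
fs = 250                      # assumed ECG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
# Toy "ECG": an 8 Hz component plus 50 Hz powerline-style interference
sig = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

def hanning_bandpass(x, fs, lo, hi):
    """Zero FFT bins outside [lo, hi] Hz, tapering the passband with Hann."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = np.zeros_like(freqs)
    band = (freqs >= lo) & (freqs <= hi)
    mask[band] = np.hanning(band.sum())   # smooth taper inside the passband
    return np.fft.irfft(X * mask, n=len(x))

clean = hanning_bandpass(sig, fs, lo=1, hi=30)
# Energy at 50 Hz should be strongly attenuated, while 8 Hz survives
X_before = np.abs(np.fft.rfft(sig))
X_after = np.abs(np.fft.rfft(clean))
bin50 = int(50 * len(sig) / fs)
```

Filtering in the frequency domain like this removes the out-of-band components that carry much of the inter-subject variability.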

25 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlocal self-similarity evaluation with a tight frame is exploited to improve patch matching; to remove Rician noise and preserve edge details, an extended difference of Gaussian (DoG) filter is applied within the nonlocal low-rank regularization model.
Abstract: The low-rank matrix approximation (LRMA) is an efficient image denoising method for reducing additive Gaussian noise. However, the existing low-rank matrix approximation does not perform well in terms of Rician noise removal for magnetic resonance imaging (MRI). To this end, we propose a novel MR image denoising approach based on the extended difference of Gaussian (DoG) filter and nonlocal low-rank regularization. In the proposed method, a novel nonlocal self-similarity evaluation with a tight frame is exploited to improve the patch matching. To remove the Rician noise and preserve the edge details, the extended DoG filter is incorporated into the nonlocal low-rank regularization model. The experimental results demonstrate that the proposed method can preserve more edge and fine structures while removing noise in MR images, compared with some existing methods.
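The difference-of-Gaussian idea behind the extended DoG filter can be illustrated in one dimension: subtracting a wide Gaussian blur from a narrow one yields a band-pass response that peaks at edges. The sigmas and radius below are arbitrary choices, and the paper's extended DoG operates on 2-D MR data inside the low-rank model.

```python
import numpy as np

# Minimal 1-D DoG sketch of the edge-preserving idea above.
def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def dog_filter(signal, sigma_narrow=1.0, sigma_wide=2.0, radius=6):
    """Band-pass response: narrow-Gaussian blur minus wide-Gaussian blur."""
    g1 = np.convolve(signal, gaussian_kernel(sigma_narrow, radius), "same")
    g2 = np.convolve(signal, gaussian_kernel(sigma_wide, radius), "same")
    return g1 - g2

# On a step edge, the DoG response peaks near the discontinuity and is
# exactly zero in the flat regions, which is why it preserves edges.
step = np.concatenate([np.zeros(50), np.ones(50)])
resp = np.abs(dog_filter(step))
edge_location = int(np.argmax(resp))   # near index 50
```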

23 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new preprocessing technique to extract the region of interest (RoI) of skin lesion datasets and compared the performance of several state-of-the-art CNN classifiers on two datasets containing raw and RoI-extracted images.
Abstract: Skin lesions are among the severe diseases that in many cases endanger patients' lives worldwide. Early detection of disease in dermoscopy images can significantly increase the survival rate. However, accurate detection of disease is highly challenging for several reasons: visual similarity between different classes of disease (e.g., melanoma and non-melanoma lesions), low contrast between lesions and skin, background noise, and artifacts. Machine learning models based on convolutional neural networks (CNN) have been widely used for automatic recognition of lesion diseases with high accuracy in comparison to conventional machine learning methods. In this research, we proposed a new preprocessing technique to extract the region of interest (RoI) from skin lesion datasets. We compare the performance of several state-of-the-art CNN classifiers on two datasets containing (1) raw and (2) RoI-extracted images. Our experimental results show that training CNN models on the RoI-extracted dataset can improve prediction accuracy (e.g., a 2.18% improvement for InceptionResNetV2). Moreover, it significantly decreases the evaluation (inference) and training time of classifiers as well.

21 citations


Journal ArticleDOI
TL;DR: The neurophysiological biomarkers obtained by a non-invasive, portable technique such as wireless EEG in the early pre-treatment phase may contribute as objective parameters to short- and long-term outcome prediction, which is pivotal to better establishing treatment strategies.
Abstract: Owing to the large inter-subject variability, early post-stroke prognosis is challenging, and objective biomarkers that can provide further prognostic information are still needed. The relation between quantitative EEG parameters in the pre-thrombolysis hyper-acute phase and outcomes has yet to be investigated. Hence, possible correlations between early EEG biomarkers, measured on bedside wireless EEG, and short-term/long-term functional and morphological outcomes were investigated in thrombolysis-treated strokes. EEG with a wireless device was performed in 20 patients with hyper-acute (< 4.5 h from onset) anterior ischemic stroke before reperfusion treatment. The correlations between outcome parameters (i.e., 7-day/12-month National Institutes of Health Stroke Scale NIHSS, 12-month modified Rankin Scale mRS, final infarct volume) and the pre-treatment EEG parameters were studied. Relative delta power and alpha power, delta/alpha (DAR), and (delta+theta)/(alpha+beta) (DTABR) ratios significantly correlated with 7-day NIHSS (rho = 0.80, − 0.81, 0.76, 0.75, respectively) and 12-month NIHSS (0.73, − 0.78, 0.74, 0.73, respectively), as well as with final infarct volume (0.75, − 0.70, 0.78, 0.62, respectively). A good outcome in terms of mRS ≤ 2 at 12 months was associated with the DAR parameter (p = 0.008). The neurophysiological biomarkers obtained by a non-invasive, portable technique such as wireless EEG in the early pre-treatment phase may contribute as objective parameters to short- and long-term outcome prediction, which is pivotal to better establishing treatment strategies. Graphical abstract

20 citations


Journal ArticleDOI
Yanzheng Lu1, Hong Wang1, Fo Hu1, Bin Zhou1, Hailong Xi1 
TL;DR: In this article, a method of information fusion for sensors including sEMG, IMU, and footswitch sensor is studied to recognize the human jump phase, which is crucial to the development of exoskeleton that assists jumping.
Abstract: Jump locomotion is a basic human movement. However, no thorough research on the recognition of jump sub-phases has been carried out so far. This paper aims to use multi-sensor information fusion and machine learning to recognize the human jump phase, which is crucial to the development of exoskeletons that assist jumping. The method of information fusion for sensors including sEMG, IMU, and footswitch sensors is studied. The footswitch signals are filtered by a median filter. A processing method of synthesizing Euler angles into a phase angle is proposed, which is beneficial to data integration. The jump locomotion is creatively segmented into five phases. The onset and offset of the active segment are detected by the sample entropy of sEMG and the standard deviation of the acceleration signal. The features are extracted from analysis windows using multi-sensor information fusion, and the dimension of the feature matrix is selected. By comparing the performances of state-of-the-art machine learning classifiers, feature subsets of sEMG, IMU, and footswitch signals are selected from time domain features across a series of analysis window parameters. The average recognition accuracy of sEMG and IMU is 91.76% and 97.68%, respectively. When using the combination of sEMG, IMU, and footswitch signals, the average accuracy is 98.70%, which outperforms the combination of sEMG and IMU (97.97%, p < 0.01).
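The sample entropy used above for sEMG onset detection can be sketched as follows; the parameters m and r are conventional defaults, not taken from the paper.

```python
import numpy as np

# Sketch of sample entropy, SampEn(m, r): the negative log of the
# conditional probability that sequences matching for m points also
# match for m + 1 points. Higher values mean a more irregular signal,
# which is what flags an active sEMG segment.
def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()          # common tolerance: 20% of the signal SD
    N = len(x)

    def match_count(m):
        templates = np.array([x[i:i + m] for i in range(N - m)])
        # Chebyshev distance between every pair of templates
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=-1)
        return (d <= r).sum() - len(templates)   # drop self-matches

    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))   # low-complexity signal
noisy = rng.normal(size=200)                       # high-complexity signal
# An active, noisy segment yields higher SampEn than a regular one
assert sample_entropy(noisy) > sample_entropy(regular)
```

Thresholding a sliding-window SampEn against a baseline is one common way to turn this statistic into an onset/offset detector.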

Journal ArticleDOI
TL;DR: A hybrid approach based on adaptive neuro-fuzzy inference system (ANFIS), the fuzzy c-means clustering (FCM), and the simulated annealing (SA) algorithm is proposed in this article.
Abstract: In the medical field, successful classification of microarray gene expression data is of major importance for cancer diagnosis. However, due to the profusion of genes, the performance of classifying DNA microarray gene expression data using statistical algorithms is often limited. Recently, there has been an important increase in studies on the utilization of artificial intelligence methods for the purpose of classifying large-scale data. In this context, a hybrid approach based on the adaptive neuro-fuzzy inference system (ANFIS), fuzzy c-means clustering (FCM), and the simulated annealing (SA) algorithm is proposed in this study. The proposed method is applied to classify five different cancer datasets (i.e., lung cancer, central nervous system cancer, brain cancer, endometrial cancer, and prostate cancer). The backpropagation algorithm, hybrid algorithm, genetic algorithm, and other statistical methods such as Bayesian network, support vector machine, and J48 decision tree are used to compare the proposed approach's performance with other algorithms. The results show that training the FCM-based ANFIS using the SA algorithm classifies all the cancer datasets more successfully, with an average accuracy rate of 96.28%, and the results of the other methods are also satisfactory. The proposed method gives more effective results than the others for classifying DNA microarray cancer gene expression data. Basic structure of the proposed method.

Journal ArticleDOI
Naichao Wu1, Shan Li1, Boyan Zhang1, Chenyu Wang1, Bingpeng Chen1, Qing Han1, Jincheng Wang1 
TL;DR: In this paper, the authors focused on the current state of the TO technique with respect to the global layout and hierarchical structure in orthopedic implants, and summarized the characteristics of implants, methods of TO, validation methods of newly designed implants, and limitations of current research.
Abstract: Metal implants are widely used in the treatment of orthopedic diseases. However, owing to the mismatched elastic moduli of bone and implants, stress shielding often occurs clinically, which can result in failure of the implant or fractures around it. Topology optimization (TO) is a technique that can provide a more efficient material distribution according to the objective function under special load and boundary conditions. Several researchers have paid close attention to TO for the optimal design of orthopedic implants. Thanks to the development of additive manufacturing (AM), the complex structures of TO designs can be fabricated. This article mainly focuses on the current state of the TO technique with respect to the global layout and hierarchical structure in orthopedic implants. In each aspect, diverse implants in different orthopedic fields related to TO design are discussed. The characteristics of implants, methods of TO, validation methods of the newly designed implants, and limitations of current research have been summarized. The review concludes with future challenges and directions for research. TO design of the global layout and local structure of implants in diverse orthopedic fields.

Journal ArticleDOI
TL;DR: In this article, the authors applied deep learning with convolutional neural networks (CNN) for the classification of solitary pulmonary nodules (SPNs) in CT scans extracted from Positron Emission Tomography and Computer Tomography (PET/CT) system.
Abstract: Early and automatic diagnosis of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) chest scans can provide early treatment for patients with lung cancer, as well as liberate doctors from time-consuming procedures. The purpose of this study is the automatic and reliable characterization of SPNs in CT scans extracted from a Positron Emission Tomography and Computed Tomography (PET/CT) system. To achieve the aforementioned task, Deep Learning with Convolutional Neural Networks (CNN) is applied. The strategy of training specific CNN architectures from scratch and the strategy of transfer learning, utilizing state-of-the-art pre-trained CNNs, are compared and evaluated. To enhance the training sets, data augmentation is performed. The publicly available database of CT scans, named the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), is also utilized to further expand the training set and is added to the PET/CT dataset. The results highlight the effectiveness of transfer learning and data augmentation for the classification task on small datasets. The best accuracy obtained on the PET/CT dataset reached 94%, utilizing a proposed modification of a state-of-the-art CNN called VGG16 and enhancing the training set with the LIDC-IDRI dataset. Besides, the proposed modification outperforms, in terms of sensitivity, several similar studies that exploit the benefits of transfer learning. Overview of the experiment setup: the two datasets containing nodule representations are combined to evaluate the effectiveness of transfer learning over the traditional approach of training Convolutional Neural Networks from scratch.

Journal ArticleDOI
TL;DR: In this article, the LBP descriptor has been implemented on an FPGA using Xilinx VIVADO 16.4 for depression level detection using facial expressions of the patient.
Abstract: The psychological health of a person plays an important role in their daily life activities. The paper addresses depression issues with a machine learning model using facial expressions of the patient. Some research has already been done on vision-based depression detection methods, but those are illumination-variant. The paper uses feature extraction with the LBP (Local Binary Pattern) descriptor, which is illumination-invariant. The Viola-Jones algorithm is used for face detection, and an SVM (support vector machine) is used for classification along with the LBP descriptor to make a complete model for depression level detection. The proposed method captures the frontal face from videos of subjects, and facial features are extracted from each frame. Subsequently, the facial features are analyzed to detect depression levels with the post-processing model. The performance of the proposed system is evaluated using machine learning algorithms in MATLAB. For a real-time system design, it is necessary to test it on a hardware platform. The LBP descriptor has been implemented on an FPGA using Xilinx VIVADO 16.4. The results of the proposed method show satisfactory performance and accuracy for depression detection in comparison with similar previous work.
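A minimal version of the LBP descriptor mentioned above is sketched below, demonstrating the illumination invariance the paper relies on. The 3x3 neighbourhood ordering used here is one common convention (Python rather than the paper's MATLAB/FPGA implementation).

```python
import numpy as np

# Minimal 3x3 local binary pattern (LBP): each pixel is coded by
# thresholding its 8 neighbours against the centre value.
def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch (clockwise from top-left)."""
    centre = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        code |= (int(n >= centre) << bit)
    return code

def lbp_image(img):
    """LBP map over the interior pixels of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = lbp_code(img[i:i + 3, j:j + 3])
    return out

# Key property for the depression-detection use case: the codes depend only
# on local ordering, so adding a constant brightness leaves them unchanged.
img = np.arange(25, dtype=float).reshape(5, 5)
assert np.array_equal(lbp_image(img), lbp_image(img + 40.0))
```

A histogram of these codes over face regions is the usual feature vector fed to the SVM.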

Journal ArticleDOI
TL;DR: In this paper, a multidimensional MI-EEG imaging method is proposed, which is based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm.
Abstract: A motor imagery EEG (MI-EEG) signal is often selected as the driving signal in an active brain computer interface (BCI) system, and recognizing MI-EEG images via convolutional neural networks (CNNs) has become a popular field, which raises the problems of maintaining the integrity of the time-frequency-space information in MI-EEG images and of exploring the feature fusion mechanism in the CNN. However, information is excessively compressed in the present MI-EEG image, and the sequential CNN is unfavorable for the comprehensive utilization of local features. In this paper, a multidimensional MI-EEG imaging method is proposed, which is based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm. The time-frequency matrix of each electrode is generated via continuous wavelet transform (WT), and the relevant section of frequency is extracted and divided into nine submatrices, the longitudinal sums and lengths of which are calculated along the directions of frequency and time successively to produce a 3 × 3 feature matrix for each electrode. Then, the feature matrix of each electrode is interpolated to coincide with its corresponding coordinates, thereby yielding a WT-based multidimensional image, called WTMI. Meanwhile, a multilevel and multiscale feature fusion convolutional neural network (MLMSFFCNN) is designed for WTMI, which has dense information, a low signal-to-noise ratio, and a strong spatial distribution. Extensive experiments are conducted on the BCI Competition IV 2a and 2b datasets, and accuracies of 92.95% and 97.03% are yielded based on 10-fold cross-validation, respectively, which exceed those of the state-of-the-art imaging methods. The kappa values and p values demonstrate that our method has lower class skew and error costs.
The experimental results demonstrate that WTMI can fully represent the time-frequency-space features of MI-EEG and that MLMSFFCNN is beneficial for improving the collection of multiscale features and the fusion recognition of general and abstract features for WTMI.

Journal ArticleDOI
TL;DR: The proposed model is based on distinguishing 9238 sequences using three stages, including data preprocessing, data labeling, and classification and will act as a prediction tool for the COVID-19 protein sequences in different countries.
Abstract: The rapid spread of coronavirus disease (COVID-19) has become a worldwide pandemic and affected more than 15 million patients reported in 27 countries. Therefore, the computational biology of this virus and its correlation with the human population urgently needs to be understood. In this paper, the classification of the human protein sequences of COVID-19 according to country is presented based on machine learning algorithms. The proposed model is based on distinguishing 9238 sequences using three stages, including data preprocessing, data labeling, and classification. In the first stage, data preprocessing converts the amino acids of COVID-19 protein sequences into eight groups of numbers based on the amino acids' volume and dipole. It is based on the conjoint triad (CT) method. In the second stage, there are two methods for labeling data from 27 countries from 0 to 26. The first method is based on selecting one number for each country according to the code numbers of countries, while the second method is based on binary elements for each country. In the last stage, machine learning algorithms are used to distinguish the COVID-19 protein sequences according to their countries. The obtained results demonstrate 100% accuracy, 100% sensitivity, and 90% specificity via the country-based binary labeling method with a linear support vector machine (SVM) classifier. Furthermore, with significant infection data, the USA is more prone to correct classification compared to other countries with fewer data. The unbalanced data for COVID-19 protein sequences is considered a major issue, especially as the USA's available data represents 76% of the total of 9238 sequences. The proposed model will act as a prediction tool for the COVID-19 protein sequences in different countries.
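The conjoint triad (CT) encoding described above can be sketched as follows. The classic Shen et al. scheme uses seven dipole/volume groups; the paper uses eight, whose exact membership is not given here, so the grouping below is the seven-group version purely for illustration.

```python
from collections import Counter
from itertools import product

# Illustrative CT grouping (seven groups by dipole/volume; the paper's
# eight-group variant is not reproduced here).
GROUPS = {
    1: "AGV", 2: "ILFP", 3: "YMTS", 4: "HNQW", 5: "RK", 6: "DE", 7: "C",
}
AA_TO_GROUP = {aa: g for g, aas in GROUPS.items() for aa in aas}

def conjoint_triad(seq):
    """Frequency vector of group triads over a protein sequence."""
    groups = [AA_TO_GROUP[aa] for aa in seq if aa in AA_TO_GROUP]
    triads = Counter(zip(groups, groups[1:], groups[2:]))
    keys = list(product(range(1, 8), repeat=3))       # 7^3 = 343 features
    return [triads.get(k, 0) for k in keys]

vec = conjoint_triad("MKVLAAGVC")
# A sequence of length n yields n - 2 overlapping triads
assert len(vec) == 343 and sum(vec) == len("MKVLAAGVC") - 2
```

The resulting fixed-length vector is what a linear SVM can consume, regardless of the original sequence length.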

Journal ArticleDOI
TL;DR: In this article, the classification accuracy and sensitivity of low-resolution plantar pressure measurements in distinguishing workplace postures were evaluated using machine learning algorithms: support vector machines (SVMs), decision tree (DT), discriminant analysis (DA), and k-nearest neighbors (KNN).
Abstract: Prolonged static weight-bearing at work may increase the risk of developing plantar fasciitis (PF). However, to establish a causal relationship between weight-bearing and PF, a low-cost objective measure of workplace behaviors is needed. This proof-of-concept study assesses the classification accuracy and sensitivity of low-resolution plantar pressure measurements in distinguishing workplace postures. Plantar pressure was measured using an in-shoe measurement system in eight healthy participants while sitting, standing, and walking. Data was resampled to simulate on/off characteristics of 24 plantar force sensitive resistors. The top 10 sensors were evaluated using leave-one-out cross-validation with machine learning algorithms: support vector machines (SVMs), decision tree (DT), discriminant analysis (DA), and k-nearest neighbors (KNN). SVM and DT best classified sitting, standing, and walking. High classification accuracy was obtained with five sensors (98.6% and 99.1% accuracy, respectively) and even a single sensor (98.4% and 98.4%, respectively). The central forefoot and the medial and lateral midfoot were the most important classification sensor locations. On/off plantar pressure measurements in the midfoot and central forefoot can accurately classify workplace postures. These results provide the foundation for a low-cost objective tool to classify and quantify sedentary workplace postures.
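The on/off resampling step described above can be sketched as follows: continuous in-shoe plantar pressure is thresholded to simulate binary force-sensitive resistors (FSRs). The 24-sensor layout matches the abstract, but the threshold value and toy pressure data are illustrative assumptions.

```python
import numpy as np

# Simulating binary FSR readouts from continuous plantar pressure.
rng = np.random.default_rng(2)
pressure = np.abs(rng.normal(20.0, 10.0, size=(100, 24)))  # frames x sensors
THRESHOLD = 15.0                    # assumed FSR trip point (arbitrary units)
on_off = (pressure > THRESHOLD).astype(np.uint8)  # simulated binary sensors
duty_cycle = on_off.mean(axis=0)    # per-sensor fraction of "on" frames, a
                                    # simple feature for posture classification
```

Per-sensor duty cycles like these are the kind of low-resolution feature the study feeds to SVM/DT/DA/KNN classifiers.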

Journal ArticleDOI
TL;DR: In this paper, a novel watermarking algorithm is designed based on Integer Wavelet Transform (IWT), combined chaotic map, recovery bit generation and SHA-256 to address the objective as mentioned earlier.
Abstract: Smart healthcare systems play a vital role in the current era of the Internet of Things (IoT) and Cyber-Physical Systems (CPS), i.e. Industry 4.0. Medical data security has become an integral part of smart hospital applications to ensure data privacy and patient data security. Usually, patient medical reports and diagnostic images are transferred to a specialist physician in another hospital for effective diagnosis. Therefore, the transmission of medical data over the internet has attracted significant interest among many researchers. The three main challenges associated with e-healthcare systems are the following: (1) ensuring authentication of medical information; (2) transmission of the medical image and patient health record (PHR) should not cause data mismatch/detachment; and (3) the medical image should not be modified accidentally or intentionally as it is transmitted over an insecure medium. Thus, it is highly essential to ensure the integrity of the medical image, especially the region of interest (ROI), before taking any diagnostic decision. Watermarking is a well-known technique used to overcome these challenges. The current research work has developed a watermarking algorithm to ensure the integrity and authentication of medical data and images. In this paper, a novel watermarking algorithm is designed based on the Integer Wavelet Transform (IWT), a combined chaotic map, recovery bit generation, and SHA-256 to address the objectives mentioned above. The scheme comprises four phases, namely a watermark generation and data embedding phase, an authentication phase, a tamper detection and localisation phase, and a lossless recovery phase. Experiments are carried out to prove that the developed IWT-based data embedding scheme offers high robustness to the data embedded in the region of non-interest (RONI), detects and localises tampered blocks inside the ROI with 100% accuracy, and recovers the tampered segments of the ROI with zero MSE. Further, a comparison is made with state-of-the-art schemes to verify the robustness of the developed system.
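The ROI integrity check that underpins the authentication and tamper detection phases can be sketched with a plain SHA-256 digest. The paper's actual scheme embeds hashes and recovery bits via IWT; here only the hash-comparison idea is shown, on invented pixel values:

```python
# Sketch of the authentication idea: a SHA-256 digest of the region of
# interest (ROI) pixels is computed at embedding time and recomputed at the
# receiver; any mismatch flags tampering. The 4x4 "image" is illustrative.
import hashlib

def roi_digest(pixels):
    """SHA-256 hex digest of a flat sequence of 8-bit pixel values."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

original = [10, 20, 30, 40,
            50, 60, 70, 80,
            90, 100, 110, 120,
            130, 140, 150, 160]

embedded = roi_digest(original)        # stored (e.g. in the RONI) at embedding

tampered = list(original)
tampered[5] = 61                       # a single-pixel modification

authentic = roi_digest(original) == embedded   # True: image unmodified
flagged = roi_digest(tampered) != embedded     # True: tampering detected
```

Hashing the ROI block-wise rather than as a whole is what allows tampered blocks to be localised, as the abstract describes.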

Journal ArticleDOI
TL;DR: In this article, a support vector machine-based black widow optimization (SVM-BWO) method was proposed for skin disease classification, which used fuzzy set segmentation to segment the skin lesion region and extracted color, gray-level co-occurrence matrix texture and shape features for further process.
Abstract: The skin, which has seven layers, is the largest human organ and the body's external barrier. According to the World Health Organization (WHO), skin cancer is the fourth leading cause of non-fatal disease burden. In the medical field, skin disease classification remains a major challenge owing to inaccurate outputs, overfitting, high computational cost, and so on. We present a novel approach of support vector machine-based black widow optimization (SVM-BWO) for skin disease classification. Images of five different kinds of skin disease, namely psoriasis, paederus, herpes, melanoma, and benign lesions, were chosen for this work alongside healthy images. The pre-processing step removes noise from the original input images. Thereafter, a novel fuzzy set segmentation algorithm segments the skin lesion region. From this, color, gray-level co-occurrence matrix (GLCM) texture, and shape features are extracted for further processing. The skin disease is then classified using the SVM-BWO algorithm. The implementation was carried out in MATLAB-2018a, and the images were collected from the ISIC-2018 dataset. Experimentally, various performance analyses against state-of-the-art techniques were performed. The proposed methodology achieves a better classification accuracy, of 92%, than the other methods.
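A gray-level co-occurrence matrix of the kind used for the texture features can be computed in a few lines. This sketch uses a single horizontal offset and two of the usual Haralick-style statistics (contrast and energy) on a toy image; real pipelines consider several offsets and angles:

```python
# Minimal gray-level co-occurrence matrix (GLCM) sketch for texture features:
# co-occurrence of horizontally adjacent gray levels, normalised to
# probabilities, from which contrast and energy are derived.

def glcm(img, levels):
    """Normalised co-occurrence matrix for offset (0, 1) in a 2-D gray image."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[v / total for v in r] for r in m]

def contrast(p):
    """Weighted sum of squared gray-level differences."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    """Sum of squared probabilities (angular second moment)."""
    return sum(v * v for r in p for v in r)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
P = glcm(img, 4)
```

Uniform regions push energy up and contrast down; strong edges do the opposite, which is why these statistics discriminate lesion textures.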

Journal ArticleDOI
TL;DR: In this paper, a deep convolutional neural network (DCNN) was proposed for the recognition of visual/mental imagination of English alphabets so as to enable typing directly via brain signals.
Abstract: Electroencephalography (EEG)-based brain-computer interfaces (BCIs) enable people to interact directly with computing devices through their brain signals. A BCI typically interprets EEG signals to reflect the user's intent or other mental activity. Motor imagery (MI) is a commonly used technique in BCIs where a user is asked to imagine moving a certain part of the body such as a hand or a foot. By correctly interpreting the signal, one can perform a multitude of tasks such as controlling a wheelchair, playing computer games, or even typing text. However, the use of motor-imagery-based BCIs outside the laboratory environment is limited due to their lack of reliability. This work focuses on another kind of mental imagery, namely visual imagery (VI): the manipulation of visual information that comes from memory. This work presents a deep convolutional neural network (DCNN)-based system for the recognition of visual/mental imagination of English alphabet letters so as to enable typing directly via brain signals. The DCNN learns to extract the spatial features hidden in the EEG signal. As opposed to many deep neural networks that use raw EEG signals for classification, this work transforms the raw signals into band powers using the Morlet wavelet transform. The proposed approach is evaluated on two publicly available benchmark MI-EEG datasets and a visual imagery dataset specifically collected for this work. The obtained results demonstrate that the proposed model performs better than the existing state-of-the-art methods for MI-EEG classification and yields an average accuracy of 99.45% on the two public MI-EEG datasets. The model also achieves an average recognition rate of 95.2% for the 26 English-language alphabet letters. Graphical abstract: overall working of the proposed solution for imagined character recognition through EEG signals.
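The band-power transform mentioned above, i.e. convolving the raw signal with a complex Morlet wavelet and taking the squared magnitude, can be sketched in pure Python. In practice a library such as SciPy or MNE would be used; the sampling rate, cycle count, and test tone below are illustrative:

```python
# Sketch of Morlet band-power extraction: the signal is convolved with an
# L1-normalised complex Morlet wavelet centred on a frequency of interest,
# and the mean squared magnitude gives the power at that frequency.
import cmath
import math

def morlet_power(signal, fs, freq, n_cycles=5):
    """Mean power of `signal` (sampled at `fs` Hz) at `freq` Hz."""
    sigma = n_cycles / (2 * math.pi * freq)          # wavelet width in seconds
    half = int(3 * sigma * fs)                       # truncate at 3 sigma
    wav = [math.exp(-(t / fs) ** 2 / (2 * sigma ** 2)) *
           cmath.exp(2j * math.pi * freq * t / fs)
           for t in range(-half, half + 1)]
    norm = sum(abs(w) for w in wav)
    wav = [w / norm for w in wav]
    powers = []
    for i in range(half, len(signal) - half):        # valid (edge-free) region
        acc = sum(signal[i + k - half] * wav[k] for k in range(len(wav)))
        powers.append(abs(acc) ** 2)
    return sum(powers) / len(powers)

fs = 128
sig = [math.sin(2 * math.pi * 10 * n / fs) for n in range(fs * 2)]  # 10-Hz tone
p10 = morlet_power(sig, fs, 10.0)   # on-frequency: large power
p25 = morlet_power(sig, fs, 25.0)   # off-frequency: near zero
```

Evaluating this at the centre frequencies of the standard EEG bands (delta through gamma) per channel yields the band-power maps the DCNN consumes.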

Journal ArticleDOI
TL;DR: In this article, the authors proposed an automatic breast cancer classification system that uses support vector machine (SVM) classifier based on integrated features (texture, geometrical, and color).
Abstract: Breast cancer is one of the most frequent causes of death among women worldwide. Nowadays, healthcare informatics is mainly focusing on the classification of breast cancer images, due to the lethal nature of this cancer. There is a risk of inter- and intra-observer variability that may lead to misdiagnosis in the detection of cancer. This study proposes an automatic breast cancer classification system that uses a support vector machine (SVM) classifier based on integrated features (texture, geometrical, and color). The publicly available University of California Santa Barbara (UCSB) and BreakHis datasets were used. A classification comparison module involving SVM, k-nearest neighbor (k-NN), random forest (RF), and artificial neural network (ANN) classifiers was also proposed to determine the classifier best suited to the detection of breast cancer from histopathology images. The performance of these classifiers was analyzed using metrics such as accuracy, specificity, sensitivity, balanced accuracy, and F-score. Results showed that, among the classifiers, the SVM performed best, with a test accuracy of approximately 90% on both datasets. Additionally, the significance of the proposed integrated SVM model was statistically analyzed against the other classifier models.

Journal ArticleDOI
TL;DR: In this article, a new approach for detecting Alzheimer's disease and potentially mild cognitive impairment according to the measured EEG records is presented, which evaluates the amount of novelty in the EEG signal as a feature for EEG record classification.
Abstract: Alzheimer's disease is diagnosed by means of daily activity assessment. EEG recording evaluation is a supporting tool that can assist the practitioner in recognizing the illness, especially in the early stages. This paper presents a new approach for detecting Alzheimer's disease, and potentially mild cognitive impairment, from measured EEG records. The proposed method evaluates the amount of novelty in the EEG signal as a feature for EEG record classification. The novelty is measured from the parameters of adaptive filtration of the EEG signal. A linear neuron with gradient descent adaptation was used as the filter in a predictive setting. The extracted feature (novelty measure) is then classified to obtain the Alzheimer's disease diagnosis. The proposed approach was cross-validated on a dataset containing EEG records of 59 patients suffering from Alzheimer's disease, 7 patients with mild cognitive impairment (MCI), and 102 controls. The results of cross-validation yield 90.73% specificity and 89.51% sensitivity. The proposed method of feature extraction from EEG is completely new and can be used with any classifier for the diagnosis of Alzheimer's disease from EEG records.
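The novelty measure can be illustrated with a least-mean-squares (LMS) predictor, i.e. a linear neuron adapted by gradient descent, where the size of the weight updates serves as the feature. The filter order, learning rate, and test signals below are illustrative choices, not the paper's:

```python
# Sketch of an adaptive-filter novelty measure: a linear neuron predicts the
# next sample from the previous `n` samples and is adapted by gradient
# descent (LMS); the mean magnitude of the weight increments is the feature.
import math

def lms_novelty(signal, n=4, mu=0.1):
    """Mean absolute weight increment of an LMS one-step-ahead predictor."""
    w = [0.0] * n
    updates = []
    for i in range(n, len(signal)):
        x = signal[i - n:i]                               # past samples
        err = signal[i] - sum(wi * xi for wi, xi in zip(w, x))
        dw = [mu * err * xi for xi in x]                  # gradient-descent step
        w = [wi + d for wi, d in zip(w, dw)]
        updates.append(sum(abs(d) for d in dw) / n)
    return sum(updates) / len(updates)

# A predictable sinusoid yields low novelty; added irregularity raises it.
smooth = [math.sin(0.2 * k) for k in range(400)]
rough = [math.sin(0.2 * k) + 0.5 * math.sin(3.1 * k ** 1.3) for k in range(400)]
nv_smooth = lms_novelty(smooth)
nv_rough = lms_novelty(rough)
```

The intuition is that once the predictor has adapted to a regular signal, weight updates shrink; persistently unpredictable (novel) content keeps forcing large updates.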

Journal ArticleDOI
TL;DR: In this paper, a deep learning-based automatic classification and quantitative analysis of blood cells are proposed using the YOLOv2 model, which achieved an average accuracy of 80.6% and a precision of 88.4%.
Abstract: Blood cell counts provide relevant clinical information about different kinds of disorders. Any deviation in the number of blood cells implies the presence of infection, inflammation, edema, bleeding, or other blood-related issues. Current microscopic methods used for blood cell counting are very tedious and highly prone to different sources of error. Besides, these techniques do not provide full information about blood cells, such as shape and size, which play important roles in the clinical investigation of serious blood-related diseases. In this paper, deep-learning-based automatic classification and quantitative analysis of blood cells are proposed using the YOLOv2 model. The model was trained on 1560 images and 2703 labeled blood cells with different hyper-parameters. It was tested on 26 images containing 1454 red blood cells, 159 platelets, 3 basophils, 12 eosinophils, 24 lymphocytes, 13 monocytes, and 28 neutrophils. The network achieved detection and segmentation of blood cells with an average accuracy of 80.6% and a precision of 88.4%. Quantitative analysis of the cells was done following classification, and mean accuracies of 92.96%, 91.96%, 88.736%, and 92.7% were achieved in the measurement of area, aspect ratio, diameter, and counting of cells, respectively. Graphical abstract: the first picture shows the input image of blood cells seen under a compound light microscope. The second image shows the tools, such as OpenCV, used to pre-process the image. The third image shows the convolutional neural network used for training and object detection. The fourth image shows the output of the network in the detection of blood cells. The last images indicate the post-processing applied to the output image, namely counting each blood cell type using the class label of each detection and quantifying morphological parameters such as area, aspect ratio, and diameter, so that the final result provides the count of each of the seven blood cell types along with morphological information of clinical value.
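The quantitative post-processing stage can be sketched from YOLO-style detections: counting per class label and deriving simple morphology from each bounding box. The detections below are invented, and the equivalent circular diameter is one plausible way to define "diameter" from a box area; it is not necessarily the paper's definition:

```python
# Sketch of blood-cell post-processing: per-class counting and per-detection
# morphology (area, aspect ratio, equivalent diameter) from bounding boxes.
import math
from collections import Counter

# YOLO-style detections as (class_label, x, y, width, height); values are
# illustrative only.
detections = [
    ("RBC", 10, 12, 60, 58),
    ("RBC", 80, 40, 62, 60),
    ("WBC", 30, 90, 110, 100),
    ("Platelet", 150, 20, 20, 18),
]

counts = Counter(label for label, *_ in detections)

def morphology(w, h):
    """Area, aspect ratio and equivalent circular diameter of one box."""
    area = w * h
    aspect = w / h
    diameter = 2 * math.sqrt(area / math.pi)   # circle with the same area
    return area, aspect, diameter

stats = [(label,) + morphology(w, h) for label, x, y, w, h in detections]
```

In a full pipeline the area would come from the segmented cell mask (e.g. OpenCV contours) rather than the bounding box, which overestimates round cells.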

Journal ArticleDOI
TL;DR: In this article, a 3D spherical coordinate transform was proposed to improve the performance of CNNs for brain tumor segmentation. But, the model was not resolution-dependent, thus it was not able to deal with the transfer learning problems related to domain shifting.
Abstract: Magnetic Resonance Imaging (MRI) is used in everyday clinical practice to assess brain tumors. Deep Convolutional Neural Networks (DCNN) have recently shown very promising results in brain tumor segmentation tasks; however, DCNN models fail the task when applied to volumes that differ from the training dataset. One of the reasons is the lack of data standardization to adjust for different models and MR machines. In this work, a 3D spherical coordinate transform applied during the pre-processing phase has been hypothesized to improve DCNN models' accuracy and to allow more generalizable results, even when the model is trained on small and heterogeneous datasets and translated to different domains. Indeed, the spherical coordinate system avoids several standardization issues, since it works independently of resolution and imaging settings. The model trained on spherical-transform pre-processed inputs resulted in superior performance over the Cartesian-input trained model in predicting glioma segmentation on the Tumor Core and Enhancing Tumor classes, with a further improvement in accuracy achieved by merging the two models. The proposed model is not resolution-dependent, thus improving segmentation accuracy and theoretically solving some transfer learning problems related to domain shift, at least in terms of image resolution in the datasets.
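The underlying coordinate mapping is the standard Cartesian-to-spherical transform; a minimal sketch for a single voxel coordinate is shown below. The choice of origin (e.g. the volume centre) is left as a parameter, and how the paper resamples intensities onto the spherical grid is not reproduced here:

```python
# Sketch of the Cartesian-to-spherical mapping used during pre-processing:
# each voxel coordinate, relative to a chosen origin, is re-expressed as
# (radius, polar angle, azimuth), independent of the grid resolution.
import math

def to_spherical(x, y, z, origin=(0.0, 0.0, 0.0)):
    """Return (r, theta, phi) of a point relative to `origin`."""
    dx, dy, dz = x - origin[0], y - origin[1], z - origin[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r) if r else 0.0   # polar angle from the +z axis
    phi = math.atan2(dy, dx)                  # azimuth in the x-y plane
    return r, theta, phi

r, theta, phi = to_spherical(1.0, 1.0, math.sqrt(2))
```

Because angles are dimensionless and the radius can be normalised, volumes acquired at different resolutions map onto a comparable (r, theta, phi) grid.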

Journal ArticleDOI
TL;DR: In this paper, the spatial covariance matrices are considered as features in order to extract the spatial information of the sEEG signals without applying any spatial filtering, and a kernel based on Riemannian geometry is proposed.
Abstract: This paper proposes a new framework for epileptic seizure detection using non-invasive scalp electroencephalogram (sEEG) signals. The major innovation of the current study is the use of Riemannian geometry to transform the covariance matrices estimated from the EEG channels into a feature vector. The spatial covariance matrices are considered as features in order to extract the spatial information of the sEEG signals without applying any spatial filtering. Since these matrices are symmetric and positive definite (SPD), they belong to a special manifold called the Riemannian manifold. Furthermore, a kernel based on Riemannian geometry is proposed. This kernel maps the SPD matrices onto the Riemannian tangent space. The SPD matrices obtained from all channels of the segmented sEEG signals are high-dimensional and contain redundant information. For these reasons, the sequential forward feature selection method is applied to select the best features and reduce the computational burden in the classification step. The selected features are fed into a support vector machine (SVM) with an RBF kernel to classify the feature vectors into seizure and non-seizure classes. The performance of the proposed method is evaluated using two long-term scalp EEG databases (the CHB-MIT benchmark and a private database). Experimental results on all 23 subjects of the CHB-MIT database reveal an accuracy of 99.87%, a sensitivity of 99.91%, and a specificity of 99.82%. In addition, the introduced algorithm is tested on the private sEEG signals recorded from 20 patients with 1380 seizures. The proposed approach achieves an accuracy, sensitivity, and specificity of 98.14%, 98.16%, and 98.12%, respectively. The experimental results on both sEEG databases demonstrate the effectiveness of the proposed method for automated epileptic seizure detection, especially for the private database, which has noisier signals than the CHB-MIT database.
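The tangent-space mapping can be illustrated for 2×2 SPD matrices, where (taking the identity as the reference point, an assumption made here for simplicity) it reduces to the matrix logarithm followed by upper-triangular vectorisation. The closed-form 2×2 eigendecomposition below avoids external linear-algebra libraries:

```python
# Sketch of mapping an SPD covariance matrix to the tangent space: compute
# the matrix logarithm via eigendecomposition, then vectorise the upper
# triangle (off-diagonal scaled by sqrt(2) to preserve the Frobenius norm).
import math

def eig_sym2(m):
    """Eigenvalues and unit eigenvectors of a symmetric 2x2 matrix."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + gap, tr / 2 - gap
    if abs(b) > 1e-12:
        v1, v2 = (l1 - d, b), (l2 - d, b)
    else:  # already diagonal: pair each eigenvalue with the right axis
        v1, v2 = ((1.0, 0.0), (0.0, 1.0)) if a >= d else ((0.0, 1.0), (1.0, 0.0))
    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)
    return (l1, l2), (unit(v1), unit(v2))

def logm_spd2(m):
    """Matrix logarithm of a 2x2 symmetric positive definite matrix."""
    (l1, l2), (v1, v2) = eig_sym2(m)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in ((l1, v1), (l2, v2)):
        c = math.log(lam)
        for i in range(2):
            for j in range(2):
                out[i][j] += c * v[i] * v[j]
    return out

def tangent_vector(cov):
    """Tangent-space feature of `cov` with the identity as reference."""
    L = logm_spd2(cov)
    return (L[0][0], math.sqrt(2) * L[0][1], L[1][1])

vec = tangent_vector([[2.0, 0.0], [0.0, 0.5]])
```

In the general case the reference point is the Riemannian mean of the training covariances rather than the identity, and packages such as pyRiemann implement the full pipeline.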

Journal ArticleDOI
TL;DR: In this article, the effects of mindfulness meditation training in electrophysiological signals, recorded during a concentration task, were evaluated during a 25-h mindfulness-based stress reduction (MBSR) course, over a period of 8 weeks.
Abstract: In this paper, we evaluate the effects of mindfulness meditation training on electrophysiological signals recorded during a concentration task. Longitudinal experiments have been limited to the analysis of psychological scores through depression, anxiety, and stress state (DASS) surveys. Here, we present a longitudinal study confronting DASS survey data with electrocardiography (ECG), electroencephalography (EEG), and electrodermal activity (EDA) signals. Twenty-five university student volunteers (mean age = 26, SD = 7, 9 male) attended a 25-h mindfulness-based stress reduction (MBSR) course over a period of 8 weeks. There were four evaluation periods: pre-, peri-, and post-course, and a fourth follow-up after 2 months. All three recorded biosignals presented congruent results, in line with the expected benefits of regular meditation practice. On average, EDA activity decreased throughout the course (−64.5%), whereas the mean heart rate displayed a small reduction (−5.8%), possibly as a result of an increase in parasympathetic nervous system activity. Prefrontal (AF3) cortical alpha activity, often associated with calm conditions, saw a very significant increase (148.1%). Also, the numbers of stressed and anxious subjects showed significant decreases (−92.9% and −85.7%, respectively). Easy to practice and within everyone's reach, mindfulness meditation can be used proactively to help prevent stress and enhance quality of life.

Journal ArticleDOI
TL;DR: The results show that the algorithm based onCHSMM is a robust tool for monitoring of preterm infants in detecting apnea bradycardia episodes and a new set of equations for CHSMM to be integrated in a detection algorithm is introduced.
Abstract: In this paper, a method for apnea bradycardia detection in preterm infants is presented based on a coupled hidden semi-Markov model (CHSMM). The CHSMM is a generalization of hidden Markov models (HMM) used for modeling mutual interactions among different observations of a stochastic process, using a finite number of hidden states with corresponding resting times. We introduce a new set of equations for the CHSMM to be integrated into a detection algorithm. The detection algorithm was evaluated on simulated data for detecting a specific dynamic, and on a clinical dataset of electrocardiogram signals collected from preterm infants for early detection of apnea bradycardia episodes. For the simulated data, the proposed algorithm was able to detect the desired dynamic with a sensitivity of 96.67% and a specificity of 98.98%. Furthermore, the method detected the apnea bradycardia episodes with 94.87% sensitivity and 96.52% specificity, with a mean time delay of 0.73 s. The results show that the CHSMM-based algorithm is a robust tool for monitoring preterm infants for the detection of apnea bradycardia episodes.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an automated system based on Gaussian mixture model superpixels for bleeding detection and segmentation of candidate regions. And the proposed system achieved 99.88% accuracy, 99.83% sensitivity, and 100% specificity.
Abstract: Wireless capsule endoscopy (WCE) is a commonly employed modality for the examination of gastrointestinal tract pathologies. However, the time taken to interpret these images is very high due to the large volume of images generated. Automated detection of disorders in these images can facilitate faster clinical intervention. In this paper, we propose an automated system based on Gaussian mixture model superpixels for bleeding detection and segmentation of candidate regions. The proposed system is realized with a classic binary support vector machine classifier trained with seven features, including color and texture attributes, extracted from the Gaussian mixture model superpixels of the WCE images. On detection of bleeding images, the bleeding regions are segmented from them by incrementally grouping the superpixels based on deltaE color differences. Tested on standard datasets, the system exhibits the best performance compared with state-of-the-art approaches with respect to classification accuracy, feature selection, computational time, and segmentation accuracy. The proposed system achieves 99.88% accuracy, 99.83% sensitivity, and 100% specificity, signifying its effectiveness in bleeding detection with very few classification errors.
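The deltaE-based superpixel grouping can be sketched with the CIE76 formula (Euclidean distance in CIELAB). The superpixel mean colours, threshold, and greedy one-pass grouping below are illustrative stand-ins for the paper's incremental scheme:

```python
# Sketch of the region-growing step: superpixels (reduced here to their mean
# CIELAB colour) are merged with a seed region when the CIE76 colour
# difference -- Euclidean distance in L*a*b* space -- is below a threshold.
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference between two (L, a, b) triples."""
    return math.dist(lab1, lab2)

def grow_region(seed_idx, labs, threshold=10.0):
    """Greedy grouping: indices whose mean colour is close to the seed's."""
    seed = labs[seed_idx]
    return [i for i, lab in enumerate(labs) if delta_e76(seed, lab) < threshold]

# Mean Lab colours of five superpixels: the first three reddish
# (bleeding-like), the last two pale mucosa (illustrative values).
labs = [(40.0, 55.0, 30.0), (42.0, 52.0, 28.0), (38.0, 58.0, 33.0),
        (70.0, 10.0, 15.0), (72.0, 8.0, 12.0)]
region = grow_region(0, labs)
```

Working in CIELAB rather than RGB makes the threshold roughly perceptually uniform, which is why deltaE is the usual choice for this kind of merging.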

Journal ArticleDOI
TL;DR: In this paper, a comparison of several state-of-the-art machine learning classifiers is proposed, where stride data are collected by using a smartphone, and the main goal is to identify a robust methodology able to assure a suited classification of gait movements, in order to allow monitoring of patients in time as well as to discriminate among a pathological and physiological gait.
Abstract: This paper proposes a reliable monitoring scheme that can assist medical specialists in watching over a patient's condition. Although several technologies are traditionally used to acquire motion data from patients, their high costs as well as the large spaces they require make them difficult to apply in a home context for rehabilitation. A reliable patient monitoring technique, which can automatically record and classify patient movements, is mandatory for a telemedicine protocol. In this paper, a comparison of several state-of-the-art machine learning classifiers is proposed, where stride data are collected and processed using a smartphone. The main goal is to identify a robust methodology able to ensure suitable classification of gait movements, in order to allow the monitoring of patients over time as well as to discriminate between pathological and physiological gait. Additionally, the advantages of smartphones, being compact, cost-effective, and relatively easy to operate, make these devices particularly suited for home-based rehabilitation programs.