
Showing papers in "Irbm in 2021"


Journal ArticleDOI
11 Jun 2021-Irbm
TL;DR: The proposed hybrid CNN-SVM model, combined with threshold-based segmentation for detection, provides a more effective classification technique; an overall accuracy of 98.4959% is obtained.
Abstract: Objective In this research paper, brain MRI images are classified, leveraging the strengths of CNNs, on a public dataset to distinguish benign from malignant tumors. Materials and Methods Deep learning (DL) methods have become popular for image classification due to their good performance in recent years. A Convolutional Neural Network (CNN) can extract features without handcrafted models and eventually achieve better classification accuracy. The proposed hybrid model combines a CNN with a support vector machine (SVM) for classification and threshold-based segmentation for detection. Result The findings of previous studies are based on different models with their respective accuracies: Rough Extreme Learning Machine (RELM), 94.233%; Deep CNN (DCNN), 95%; Deep Neural Network (DNN) with Discrete Wavelet Autoencoder (DWA), 96%; k-nearest neighbors (kNN), 96.6%; CNN, 97.5%. The overall accuracy of the hybrid CNN-SVM is 98.4959%. Conclusion Brain cancer is one of the most dangerous diseases, with a high death rate; detection and classification of brain tumors is a challenging task in medical imaging because of the abnormal growth, shape, orientation, and location of cells. Magnetic resonance imaging (MRI) is a typical medical imaging method for brain tumor analysis. Conventional machine learning (ML) techniques categorize brain cancer based on handcrafted properties chosen with a radiologist's expertise, which can lead to failures in execution and decrease the effectiveness of an algorithm. In summary, the proposed hybrid model provides a more effective classification technique.
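The threshold-based segmentation step the abstract mentions can be sketched in a few lines. This is an illustrative toy (not the authors' code); the threshold value and the sample "slice" are made up.

```python
# Hypothetical sketch of threshold-based segmentation: intensities above a
# chosen threshold become candidate tumor pixels in a binary mask.

def threshold_segment(image, threshold):
    """Return a binary mask: 1 where intensity exceeds threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

mri_slice = [
    [12,  40, 200, 210],
    [15, 180, 220,  35],
    [10,  20,  30,  25],
]
mask = threshold_segment(mri_slice, 128)  # bright regions flagged as 1
```

In practice the threshold is tuned (or derived, e.g. via Otsu's method) rather than fixed by hand.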

125 citations


Journal ArticleDOI
01 Apr 2021-Irbm
TL;DR: Deep learning methods are briefly introduced, a number of important deep learning approaches to solving super resolution problems are presented, different architectures as well as up-sampling operations are introduced, and the challenges to overcome are discussed.
Abstract: Super resolution problems are widely discussed in medical imaging. The spatial resolution of medical images is often insufficient due to constraints such as image acquisition time, low irradiation dose, or hardware limits. To address these problems, different super resolution methods have been proposed, such as optimization- or learning-based approaches. Recently, deep learning methods have become a thriving technology and are developing at an exponential speed. We think it is necessary to write a review presenting the current situation of deep learning in medical imaging super resolution. In this paper, we first briefly introduce deep learning methods, then present a number of important deep learning approaches to solving super resolution problems; different architectures as well as up-sampling operations are introduced. Afterwards, we focus on the applications of deep learning methods to medical imaging super resolution problems; the challenges to overcome are presented as well.

94 citations


Journal ArticleDOI
01 Aug 2021-Irbm
TL;DR: A unique way to increase the performance of CNN models by applying preprocessing to the image dataset before sending it to the CNN architecture for feature extraction.
Abstract: Objectives Alzheimer's Disease (AD) is the most common type of dementia. In all leading countries, it is one of the primary causes of death in senior citizens. Currently, it is diagnosed by calculating the MMSE score and by manual study of MRI scans. Different machine learning methods are also used for automatic diagnosis, but existing methods have limitations in terms of accuracy. The main objective of this paper is therefore to add a preprocessing step before the CNN model to increase classification accuracy. Materials and method In this paper, we present a deep learning-based approach for detection of Alzheimer's Disease using the ADNI database of Alzheimer's disease patients; the dataset contains fMRI and PET images of Alzheimer's patients along with images of normal subjects. We applied 3D-to-2D conversion and resizing of the images before applying the VGG-16 convolutional neural network architecture for feature extraction. Finally, SVM, Linear Discriminant, K-means clustering, and Decision tree classifiers are used for classification. Results The experimental results show that an average accuracy of 99.95% is achieved for classification of the fMRI dataset, while an average accuracy of 73.46% is achieved with the PET dataset. Comparing the results on the basis of accuracy, specificity, sensitivity, and other parameters, we found them to be better than existing methods. Conclusions This paper suggested a unique way to increase the performance of CNN models by applying preprocessing to the image dataset before feeding it to the CNN architecture for feature extraction. We applied this method to the ADNI database, and comparison of the accuracies with similar approaches shows better results.
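The resizing part of the preprocessing pipeline (3D-to-2D conversion followed by resizing before VGG-16) can be sketched with nearest-neighbour sampling. This is an assumption for illustration; the paper does not specify the interpolation scheme, and the sizes here are toy values.

```python
# Hedged sketch of image resizing by nearest-neighbour sampling.
# A real pipeline would resize each 2-D slice to the 224x224 input VGG-16 expects.

def resize_nearest(image, out_h, out_w):
    """Resize a 2-D image (list of rows) via nearest-neighbour sampling."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

slice_2d = [[1, 2], [3, 4]]
resized = resize_nearest(slice_2d, 4, 4)  # upscale a 2x2 slice to 4x4
```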

65 citations


Journal ArticleDOI
01 Oct 2021-Irbm
TL;DR: Experimental results evaluated using a publicly available database show that the proposed CNN and RNN merging model with canonical correlation analysis achieves higher accuracy compared to other state-of-the-art blood cell classification techniques.
Abstract: White Blood Cells play an important role in observing the health condition of an individual. Diagnosis of blood disease involves the identification and characterization of a patient's blood sample. Recent approaches employ Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models, and mergers of the two, to enrich the understanding of image content. End-to-end training on big data in medical image analysis has encouraged us to discover prominent features from sample images. Single-cell patch extraction from blood samples for blood cell classification has resulted in good performance rates. However, these approaches are unable to address the issue of multiple overlapping cells. To address this problem, the Canonical Correlation Analysis (CCA) method is used in this paper. The CCA method accounts for the effects of overlapping nuclei: multiple nuclei patches are extracted, learned, and trained at a time. As a result, the classification time is reduced, the dimension of the input images is compressed, and the network converges faster with more accurate weight parameters. Experimental results evaluated using a publicly available database show that the proposed CNN and RNN merging model with canonical correlation analysis achieves higher accuracy compared to other state-of-the-art blood cell classification techniques.

57 citations


Journal ArticleDOI
08 Jan 2021-Irbm
TL;DR: The obtained results reveal that the proposed Computer-Aided-Diagnosis (CAD) tool is robust for the automatic detection and classification of breast cancer.
Abstract: Background and objective Breast cancer is the most invasive form of cancer affecting women globally. After lung cancer, breast cancer accounts for the greatest number of cancer deaths among women. In recent times, several intelligent methodologies have been developed for effective detection and classification of this noxious type of cancer. To further improve the rate of early diagnosis and increase the life span of victims, continued research into breast cancer classification is essential. Accordingly, a new customized method is proposed that integrates deep learning with an extreme learning machine (ELM) optimized using an improved crow-search algorithm (ICS-ELM). Thus, to advance the state of the art, an improved deep feature-based crow-search optimized extreme learning machine is proposed for addressing this health-care problem. The work first detects the input mammograms as either normal or abnormal, and subsequently classifies the type of abnormality, i.e., benign or malignant. Materials and methods The digital mammograms for this work are taken from the Curated Breast Imaging Subset of DDSM (CBIS-DDSM), Mammographic Image Analysis Society (MIAS), and INbreast datasets. The work employs 570 digital mammograms (250 normal, 200 benign and 120 malignant cases) from the CBIS-DDSM dataset, 322 digital mammograms (207 normal, 64 benign and 51 malignant cases) from the MIAS database, and 179 full-field digital mammograms (66 normal, 56 benign and 57 malignant cases) from the INbreast dataset for its evaluation. The work utilizes ResNet-18 based deep extracted features with the proposed Improved Crow-Search Optimized Extreme Learning Machine (ICS-ELM) algorithm.
Results The proposed work is finally compared with the existing Support Vector Machines (RBF kernel), ELM, particle swarm optimization (PSO) optimized ELM, and crow-search optimized ELM, where the maximum overall classification accuracy is obtained for the proposed method with 97.193% for DDSM, 98.137% for MIAS and 98.266% for INbreast datasets, respectively. Conclusion The obtained results reveal that the proposed Computer-Aided-Diagnosis (CAD) tool is robust for the automatic detection and classification of breast cancer.

48 citations


Journal ArticleDOI
27 Jan 2021-Irbm
TL;DR: In this article, a new Multiple Kernels-ELM-based Deep Neural Network (MKs-ELM-DNN) method is proposed for the detection of novel coronavirus disease through chest CT scan images.
Abstract: Objectives Coronavirus disease is a fatal epidemic that originated in Wuhan, China in December 2019. This disease is diagnosed using radiological images taken with basic scanning methods, alongside test kits for Reverse Transcription Polymerase Chain Reaction (RT-PCR). Automatic analysis of chest Computed Tomography (CT) images based on image processing technology plays an important role in combating this infectious disease. Material and methods In this paper, a new Multiple Kernels-ELM-based Deep Neural Network (MKs-ELM-DNN) method is proposed for the detection of the novel coronavirus disease, also known as COVID-19, through chest CT scan images. In the proposed model, deep features are extracted from CT scan images using a Convolutional Neural Network (CNN). For this purpose, the pre-trained DenseNet201 CNN architecture, based on the transfer learning approach, is used. An Extreme Learning Machine (ELM) classifier based on different activation methods is used to calculate the architecture's performance. Lastly, the final class label is determined by majority voting over the predictions obtained from the ReLU-ELM, PReLU-ELM, and TanhReLU-ELM based architectures. Results In experimental work, a public dataset containing COVID-19 and Non-COVID-19 classes was used to verify the validity of the proposed MKs-ELM-DNN model. According to the results obtained, an accuracy score of 98.36% was reached using the MKs-ELM-DNN model. The results demonstrate that, in comparison, the proposed MKs-ELM-DNN model is more successful than state-of-the-art algorithms and previous studies. Conclusion This study shows that the proposed Multiple Kernels-ELM-based Deep Neural Network model can effectively contribute to the identification of COVID-19 disease.

42 citations


Journal ArticleDOI
01 Oct 2021-Irbm
TL;DR: A two-stage decision support system to overcome the over-fitting issue and to optimize the generalization factor is proposed and applied to the HF subset of the publicly available Cleveland heart disease database.
Abstract: Available clinical methods for heart failure (HF) diagnosis are expensive and require a high level of expert intervention. Recently, various machine learning models have been developed for the prediction of HF, and most of them have an issue with over-fitting. Over-fitting occurs when a machine learning based predictive model shows better performance on the training data yet demonstrates poor performance on the testing data, or the other way around. Developing a machine learning model with generalization capabilities (such that the model exhibits good performance on both the training and the testing data sets) could minimize overall prediction errors. Hence, such prediction models could potentially be helpful to cardiologists for the effective diagnosis of HF. This paper proposes a two-stage decision support system to overcome the over-fitting issue and to optimize the generalization factor. The first stage uses a mutual information based statistical model while the second stage uses a neural network. We applied our approach to the HF subset of the publicly available Cleveland heart disease database. Our experimental results show that the proposed decision support system has optimized generalization capabilities and has reduced the mean percent error (MPE) to 8.8%, which is significantly less than in recently published studies. In addition, our model exhibits a 93.33% accuracy rate, which is higher than twenty-eight recently developed HF risk prediction models that achieved accuracies in the range of 57.85% to 92.31%. We hope that our decision support system will be helpful to cardiologists if deployed in a clinical setup.
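The first stage's mutual information based feature assessment can be illustrated with a small sketch for discrete variables, using the plug-in (empirical frequency) estimator. This is a generic illustration, not the paper's implementation.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# A feature identical to a balanced binary label carries exactly 1 bit;
# a feature independent of the label carries 0 bits.
mi_dependent = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
mi_independent = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

Continuous clinical attributes would first be discretized (or a continuous estimator used) before ranking features this way.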

42 citations


Journal ArticleDOI
01 Feb 2021-Irbm
TL;DR: A new electrocardiogram (ECG) data compression scheme which employs sifting-function-based empirical mode decomposition (EMD) and the discrete wavelet transform, and which offers better compression performance while preserving the key features of the signal.
Abstract: Objective In health-care systems, compression is an essential tool to solve storage and transmission problems. In this regard, this paper reports a new electrocardiogram (ECG) data compression scheme which employs sifting-function-based empirical mode decomposition (EMD) and the discrete wavelet transform. Method EMD based on the sifting function is utilized to get the first intrinsic mode function (IMF). After EMD, the first IMF and four significant sifting functions are combined; this combination is free from many irrelevant components of the signal. The discrete wavelet transform (DWT) with mother wavelet ‘bior4.4’ is applied to this combination. The transform coefficients obtained after the DWT are passed through dead-zone quantization, which discards small transform coefficients lying around zero. Further, integer conversion of the coefficients and run-length encoding are utilized to achieve a compressed form of the ECG data. Results The compression performance of the proposed scheme is evaluated using 48 ECG records of the MIT-BIH arrhythmia database. In the comparison of compression results, it is observed that the proposed method exhibits better performance than many recent ECG compressors. A mean opinion score test is also conducted to evaluate the true quality of the reconstructed ECG signals. Conclusion The proposed scheme offers better compression performance while preserving the key features of the signal very well.
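The last two stages of the pipeline, dead-zone quantization followed by run-length encoding, can be sketched compactly. The dead-zone width and the sample coefficients below are illustrative assumptions, not values from the paper.

```python
def dead_zone_quantize(coeffs, delta):
    """Zero out coefficients inside the dead zone (-delta, delta); quantize the rest."""
    return [0 if abs(c) < delta else round(c / delta) for c in coeffs]

def run_length_encode(values):
    """Encode a sequence as [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

coeffs = [0.02, -0.3, 1.9, 0.01, 0.0, 0.0, -2.4]   # toy DWT coefficients
quantized = dead_zone_quantize(coeffs, 0.5)         # small values collapse to 0
encoded = run_length_encode(quantized)              # zero runs compress well
```

The point of the dead zone is that wavelet coefficients of an ECG cluster near zero, so long zero runs appear and RLE shrinks them efficiently.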

42 citations


Journal ArticleDOI
T. Liu1, J. Huang1, T. Liao1, R. Pu1, S. Liu1, Y. Peng1 
04 Jan 2021-Irbm
TL;DR: The results show that compared with the traditional DL model, the Hybrid DL model proposed in this paper is more accurate and efficient in predicting breast cancer subtypes.
Abstract: Background The prediction of breast cancer subtypes plays a key role in the diagnosis and prognosis of breast cancer. In recent years, deep learning (DL) has shown good performance in the intelligent prediction of breast cancer subtypes. However, most traditional DL models use single-modality data, which can extract only a few features, so they cannot establish a stable relationship between patient characteristics and breast cancer subtypes. Dataset We used the TCGA-BRCA dataset as the sample set for molecular subtype prediction of breast cancer. It is a public dataset that can be obtained through the following link: https://portal.gdc.cancer.gov/projects/TCGA-BRCA Methods In this paper, a Hybrid DL model based on multimodal data is proposed. We combine the patient's gene-modality data with image-modality data to construct a multimodal fusion framework. According to their different forms and states, we set up a feature extraction network for each modality, and then fuse the outputs of the two feature networks based on the idea of weighted linear aggregation. Finally, the fused features are used to predict breast cancer subtypes. In particular, we use principal component analysis to reduce the dimensionality of the high-dimensional gene-modality data and to filter the image-modality data. We also improve the traditional feature extraction network to make it perform better. Results The results show that, compared with traditional DL models, the Hybrid DL model proposed in this paper is more accurate and efficient in predicting breast cancer subtypes. Our model achieved a prediction accuracy of 88.07% over 10 runs of 10-fold cross-validation. We performed a separate AUC test for each subtype, and the average AUC value obtained was 0.9427. In terms of subtype prediction accuracy, our model is about 7.45% higher than the previous average.

30 citations


Journal ArticleDOI
01 Aug 2021-Irbm
TL;DR: A support vector machine (SVM) model with radial basis function (RBF) kernel appeared to be the most successful classifier by utilizing six features, namely, Body Mass Index, Age, Glucose, MCP-1, Resistin, and Insulin, which outperformed state-of-the-art methods reported in the literature.
Abstract: Breast cancer is one of the most prevalent types of cancer in females and has become rampant all over the world in recent years. The survival rate of breast cancer patients degrades considerably for patients diagnosed at an advanced stage compared to those diagnosed at an early stage. The objective of this study is twofold. The first aim is to find the most relevant biomarkers of breast cancer that can be obtained from regular blood analysis and anthropometric measurements. The other is to improve the performance of current computer-aided diagnosis (CAD) systems for early breast cancer detection. This study utilized a recent data set containing nine anthropometric and clinical attributes. In our methodology, we first performed multicollinearity analysis and ranked the features based on the weighted average score obtained from four filter-based feature evaluation methods: F-score, information gain, chi-square statistic, and Minimum Redundancy Maximum Relevance. Next, to improve the separability of the target classes, we scaled and weighted the dataset using min-max normalization and similarity-based attribute weighting by the k-means clustering algorithm, respectively. Finally, we trained standard machine learning (ML) models and evaluated the performance metrics by the 10-fold cross-validation method. Our support vector machine (SVM) model with radial basis function (RBF) kernel appeared to be the most successful classifier, utilizing six features, namely, Body Mass Index (BMI), Age, Glucose, MCP-1, Resistin, and Insulin. The obtained classification accuracy, sensitivity, and specificity are 93.9% (95% CI: 93.2–94.6%), 95.1% (95% CI: 94.4–95.8%), and 94.0% (95% CI: 93.3–94.7%), respectively; these performance metrics outperform state-of-the-art methods reported in the literature.
The developed model could potentially assist medical experts in the early diagnosis of breast cancer by employing a set of attributes that can be easily obtained from regular blood analysis and anthropometric measurements.
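The min-max normalization step in the methodology above can be sketched per feature column. This is the standard [0, 1] scaling; the sample values are hypothetical.

```python
def min_max_scale(column):
    """Scale one feature column to the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:                      # constant column: nothing to separate
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

glucose = [70, 92, 134, 200]          # hypothetical raw attribute values
scaled = min_max_scale(glucose)       # all values now lie in [0, 1]
```

As the abstract notes, scaling like this improves class separability for distance- and kernel-based classifiers such as RBF-SVM, which are sensitive to feature ranges.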

27 citations


Journal ArticleDOI
20 May 2021-Irbm
TL;DR: A survey of the use cases of IoT and Blockchain in the healthcare sector that will serve as a state of the art for future research, and that gives directions for possible new research that could help revolutionize the healthcare sector by using other technologies such as artificial intelligence, big data, and fog and cloud computing.
Abstract: Objectives With rapid evolution and technological advancement, the healthcare sector is evolving day by day. It is taking advantage of different technologies such as the Internet of Things and Blockchain. Several applications related to daily healthcare activities are adopting these technologies. In this paper, we present a review in which we group different healthcare applications that integrate the Internet of Things and Blockchain in their systems. Material and methods A review study of the integration of IoT and Blockchain in healthcare systems was conducted. We searched the databases ScienceDirect, IEEE Xplore, Google Scholar and the ACM Digital Library. Results This review focuses on categorizing the use cases of IoT and Blockchain in the healthcare sector. The study listed 6 applications in medical services, namely, remote patient monitoring, electronic medical records management, disease prediction, patient tracking, drug traceability and fighting infectious disease, especially COVID-19. The paper also investigates the challenges associated with the adoption of Blockchain technology in healthcare IoT-based systems and some of the existing solutions. It also introduces some future research directions. Conclusion This survey of the use cases of IoT and Blockchain in the healthcare sector will serve as a state of the art for future research. In addition, the paper gives some directions for possible new research that could help revolutionize the healthcare sector by using other technologies such as artificial intelligence, big data, and fog and cloud computing.
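The tamper-evidence property that motivates Blockchain in healthcare IoT can be shown with a minimal hash-chain toy. This is a didactic sketch, not any production system; the record fields are invented.

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash, chaining blocks."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Each sensor reading is chained to its predecessor's hash.
genesis = block_hash({"patient": "anon-001", "heart_rate": 72}, "0" * 64)
second = block_hash({"patient": "anon-001", "heart_rate": 75}, genesis)
# Altering the first record would change `genesis` and so invalidate `second`,
# which is what makes retroactive tampering with records detectable.
```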

Journal ArticleDOI
24 Apr 2021-Irbm
TL;DR: This paper proposes a novel deep learning approach that identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional ECG beat images; it is more suitable for mobile device-based diagnosis systems as it does not involve any complex preprocessing.
Abstract: Background Electrocardiogram (ECG) is a method of recording the electrical activity of the heart and it provides a diagnostic means for heart-related diseases. Arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm. Early detection of arrhythmia has great importance to prevent many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden deaths. Hence, many studies have been presented to develop computer-aided-diagnosis (CAD) systems to automatically identify arrhythmias. Methods This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using Convolutional Neural Network (CNN) trained by two-dimensional (2D) ECG beat images. Firstly, ECG signals, which consist of 5 different arrhythmias, are segmented into heartbeats which are transformed into 2D grayscale images. Afterward, the images are used as input for training a new CNN architecture to classify heartbeats. Results The experimental results show that the classification performance of the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7%, and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to other popular CNN architectures such as LeNet and ResNet-50 to evaluate the performance of the study. Conclusions Test results demonstrate that the deep network trained by ECG images provides outstanding classification performance of arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational costs compared to existing methods and is more suitable for mobile device-based diagnosis systems as it does not involve any complex preprocessing process. 
Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for the classification of ECG arrhythmias.
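The beat-to-image transformation described above can be sketched roughly as follows. Nearest-neighbour resampling into a square grid is an assumption for illustration; the paper's exact segmentation and image size are not reproduced here.

```python
def beat_to_image(samples, side):
    """Fold a 1-D heartbeat segment into a side x side 2-D grayscale grid."""
    n = side * side
    # nearest-neighbour resample the beat to exactly n values
    resampled = [samples[i * len(samples) // n] for i in range(n)]
    return [resampled[r * side:(r + 1) * side] for r in range(side)]

beat = list(range(16))           # stand-in for one segmented ECG beat
image = beat_to_image(beat, 4)   # 4x4 grid ready for a 2-D CNN input
```

In a real pipeline, each beat would be segmented around its R-peak, amplitude-normalized to grayscale, and resized to the CNN's fixed input resolution.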

Journal ArticleDOI
01 Aug 2021-Irbm
TL;DR: In this paper, four wavelet-based techniques were used for ECG data compression: wavelet packet transform with run-length encoding (RLE), wavelet transform with Huffman encoding, wavelet transform with RLE, and wavelet transform with Lempel-Ziv-Welch (LZW) coding.
Abstract: Compression of the electrocardiogram (ECG) signal has received much consideration from researchers since computer-aided analysis of the ECG came into being. In some critical cases, viz., astronauts, persons under cardiac surveillance, ambulatory patients and Holter monitoring systems, continuous ECG data recording and transmission from one location to another is required. However, the size of the recorded data becomes so voluminous that its transmission becomes practically impossible. In this paper, ECG data compression using wavelet-based techniques is presented, namely: a) wavelet packet transform with run-length encoding (RLE), b) wavelet transform with Huffman encoding, c) wavelet transform with RLE and d) wavelet transform with Lempel-Ziv-Welch (LZW) coding. The results have been tested using the MIT-BIH (Massachusetts Institute of Technology/Beth Israel Hospital) arrhythmia database. The performance of these methodologies is examined in a quantitative and qualitative manner. From the tabulated results, it can be observed that the methodology based on WT and LZW provides efficient results in terms of both compression ratio (CR ≈ 20 to 30) and percent root mean square difference (PRD ≈ 0.01 to 1.8); hence the overall QS value is improved.
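The two figures of merit quoted above, CR and PRD, have standard definitions that can be computed as follows (a sketch, not the paper's code; the sample signals are toy values).

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original data over size of the compressed data."""
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """Percent root mean square difference between original and reconstruction."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

x = [3.0, 4.0]                         # toy "signal"
lossless_error = prd(x, x)             # 0.0 for perfect reconstruction
cr = compression_ratio(1000, 50)       # e.g. 1000 bits stored in 50 bits
```

A higher CR with a lower PRD is better, which is why quality scores like QS are often defined as CR divided by PRD.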

Journal ArticleDOI
01 Feb 2021-Irbm
TL;DR: This review focuses on heart rate measurement methods located on the forearm, and more specifically on the wrist; the superposition of motion artefacts over the signal of interest is one of the largest drawbacks of these methods when used outside laboratory conditions.
Abstract: When evaluating the general health condition of a patient, heart rate is an essential indicator as it is directly representative of the cardiac system's state. Continuous heart rate measurement methods are required for the ambulatory monitoring involved in preliminary diagnostic indicators of cardiac diseases or stroke. The growing number of recent developments in wearable devices reflects the increasing demand for wrist-worn activity trackers for fitness and health applications. Indeed, the wrist represents a convenient location in terms of form factor and acceptability for patients. While most commercially available devices are based on optical methods for heart rate measurement, other methods have also been developed, based on various physiological phenomena. This review focuses on heart rate measurement methods located on the forearm and more specifically on the wrist. For each method, the physiological mechanism involved is described, and the associated transducers for bio-signal acquisition as well as practical developments and prototypes are presented. Methods are discussed with respect to their advantages, limitations and suitability for ambulatory use. More specifically, the superposition of motion artefacts over the signal of interest is one of the largest drawbacks of these methods when used outside laboratory conditions. As such, artefact reduction techniques proposed in the literature are also presented and discussed.

Journal ArticleDOI
19 May 2021-Irbm
TL;DR: This study used Deep Convolutional Neural Networks (DCNNs) to classify Acute Lymphoblastic Leukaemia (ALL) according to the WHO classification scheme without using any image segmentation or feature extraction techniques.
Abstract: Purpose Leukaemia is diagnosed conventionally by observing the peripheral blood and bone marrow smear using a microscope and with the help of advanced laboratory tests. Image processing-based methods, which are simple, fast, and cheap, can be used to detect and classify leukemic cells by processing and analysing images of the microscopic smear. The proposed study aims to classify Acute Lymphoblastic Leukaemia (ALL) using Deep Learning (DL) based techniques. Procedures The study used Deep Convolutional Neural Networks (DCNNs) to classify ALL according to the WHO classification scheme, without using any image segmentation or feature extraction, which involve intense computation. Images from an online image bank of the American Society of Haematology (ASH) were used for the classification. Findings A classification accuracy of 94.12% is achieved by the study in separating B-cell and T-cell ALL images using the pretrained CNN AlexNet as well as LeukNet, a custom-made deep learning network designed in the proposed work. The study also compared the classification performance of three different training algorithms. Conclusions The paper detailed the use of DCNNs to classify ALL without using any image segmentation or feature extraction techniques. To the best of the authors' knowledge, classification of ALL into subtypes according to the WHO classification scheme using image processing techniques is not available in the literature. The present study considered the classification of ALL only; detection of other types of leukemic images can be attempted in future research.

Journal ArticleDOI
29 Apr 2021-Irbm
TL;DR: A novel dual-stream convolutional neural network (DCNN) is proposed, which can use time domain signal and frequency domain signal as the inputs, and the extracted time- domain features and frequency-domain features are fused by linear weighting for classification training.
Abstract: Background and objective An important task of a motor imagery brain-computer interface (BCI) is to extract effective time-domain, frequency-domain or time-frequency-domain features from the raw electroencephalogram (EEG) signals for classification of motor imagery. However, choosing an appropriate method to combine time-domain and frequency-domain features to improve the performance of motor imagery recognition is still a research hotspot. Methods In order to fully extract and utilize the time-domain and frequency-domain features of EEG in classification tasks, this paper proposed a novel dual-stream convolutional neural network (DCNN) which can take both time-domain and frequency-domain signals as inputs; the extracted time-domain and frequency-domain features are fused by linear weighting for classification training. Furthermore, the weight can be learned by the DCNN automatically. Results Experiments based on BCI competition II dataset III and BCI competition IV dataset 2a showed that the proposed model performs better than other conventional methods. The model that used both time-domain and frequency-domain signals as inputs performed better than models that used only time-domain or only frequency-domain signals: classification accuracy was improved for each subject compared with the single-input models. Conclusions Further analysis showed that the fusion weight is subject-specific, and adjusting the weight coefficient automatically helps to improve the classification accuracy.
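The weighted linear fusion of the two streams can be sketched as below. The weight is shown here as a plain scalar for illustration; in the paper it is learned by the network during training.

```python
def fuse_features(time_feats, freq_feats, w):
    """Element-wise linear fusion: w * time-feature + (1 - w) * frequency-feature."""
    return [w * t + (1 - w) * f for t, f in zip(time_feats, freq_feats)]

# Toy feature vectors from the two streams, fused with an equal weight.
fused = fuse_features([1.0, 2.0], [3.0, 4.0], w=0.5)
```

Making `w` a trainable parameter lets gradient descent discover, per subject, how much each stream should contribute, which matches the abstract's observation that the fusion weight is subject-specific.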

Journal ArticleDOI
12 Feb 2021-Irbm
TL;DR: In this article, powder-mixed electric discharge machining (PMEDM) was used to modify the surface characteristics of Mg-4Zn alloy using zirconium (Zr) and manganese (Mn) powder particles.
Abstract: Objectives Magnesium alloys are potential candidates for metallic implants due to their excellent mechanical characteristics, biodegradable nature, and properties similar to human bone. However, a high degradation rate is the primary obstacle to implementing these alloys as biodegradable orthopedic implants. Powder-mixed electric discharge machining (PMEDM) is an emerging method of surface modification of metallic alloys that can be implemented to improve the corrosion resistance of Mg alloys. Therefore, PMEDM using zirconium (Zr) and manganese (Mn) powder particles has been proposed to modify the surface characteristics of Mg-4Zn alloy. Materials and Methods In the present work, Zr and Mn powders have been used in varying concentrations during PMEDM of Mg-4Zn alloy. Experiments were conducted as per a mixed-design L18 orthogonal array (OA). Taguchi and Grey Relational Analysis (GRA) have been used to optimize the process parameters. Analysis of the response characteristics, namely material removal rate (MRR), surface roughness (SR), and thickness of the alloyed layer (TAL), has been carried out at different values of the input variables (powder additive (Pa), powder concentration (Cp), peak current (Ip), pulse-on time (Ton) and duty cycle (DC)). Corrosion analysis was carried out by immersing the specimen (machined at the optimized setting) in simulated body fluid (SBF). Results It is observed from the analysis that Cp, Ip, and Ton play a pivotal role in the response characteristics. The favorable setting suggested by the grey approach is Pa: Zr; Cp: 2 g/l; Ip: 4 A; Ton: 50 μs; DC: 80%, and the responses at this setting are confirmed by confirmation experiments as MRR: 32.14 mm3/min; SR: 5.578 μm; and TAL: 8.28 μm. The immersion test signifies that the corrosion rate (CR) of the PMEDMed sample (3.20 mm/year) is 40.74% less than the corrosion rate of the polished sample (5.40 mm/year).
Conclusion Zr powder shows better performance in terms of higher MRR, lower SR and higher TAL than Mn powder during the PMEDM process. The corroded surface of the polished sample exhibited larger micro-pits and cracks than the machined sample, confirming that surface modification of Mg-4Zn alloy via PMEDM is a powerful tool to enhance its corrosion resistance.
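
The Grey Relational Analysis step used above for multi-response optimization can be sketched numerically: each response is min-max normalized (larger-the-better for MRR and TAL, smaller-the-better for SR), deviations from the ideal are converted to grey relational coefficients, and their mean gives a grey relational grade per experimental run. The response values below are illustrative placeholders, not the paper's L18 data.

```python
import numpy as np

# Hypothetical response matrix for 4 runs (illustrative values only):
# columns = MRR (mm3/min), SR (um), TAL (um)
responses = np.array([
    [25.0, 6.1, 6.5],
    [32.1, 5.6, 8.3],
    [28.4, 7.0, 7.1],
    [30.2, 5.9, 7.8],
])
larger_is_better = [True, False, True]      # MRR and TAL up, SR down

def grey_relational_grade(x, larger_better, zeta=0.5):
    x = np.asarray(x, dtype=float)
    norm = np.empty_like(x)
    for j, lb in enumerate(larger_better):
        col = x[:, j]
        if lb:                               # larger-the-better normalisation
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:                                # smaller-the-better normalisation
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    dev = 1.0 - norm                         # deviation from the ideal (= 1)
    coeff = (dev.min() + zeta * dev.max()) / (dev + zeta * dev.max())
    return coeff.mean(axis=1)                # grade = mean coefficient per run

grades = grey_relational_grade(responses, larger_is_better)
best_run = int(np.argmax(grades))            # run with the highest grade
```

With these invented numbers, run 1 is best in all three responses at once, so its grade is exactly 1; real data would trade the responses off against each other.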

Journal ArticleDOI
26 Apr 2021-Irbm
TL;DR: A binary grade classifier is trained and the coherence of radiologic criteria for low grade versus high grade classification under WHO terms is highlighted, showing how the histogram of prediction scores and crossed prediction scores can be used as tools for data exploration and performance evaluation.
Abstract: Objectives Glioma grading using machine learning on magnetic resonance data is a growing topic. According to the World Health Organization (WHO), the classification of glioma discriminates between low grade gliomas (LGG), grades I, II; and high grade gliomas (HGG), grades III, IV, leading to major issues in oncology for the therapeutic management of patients. A well-known dataset for machine-based grade prediction is the MICCAI Brain Tumor Segmentation (BraTS) dataset. However, this dataset is not divided into WHO-defined LGG and HGG, since it combines grades I, II and III as “lower grade gliomas”, while its HGG category only contains grade IV glioblastoma multiforme. In this paper we want to train a binary grade classifier and investigate the consistency of the original BraTS labels with radiologic criteria using machine-aided predictions. Material and methods Using WHO-based radiomic features, we trained an SVM classifier on the BraTS dataset, and used the prediction score histogram to investigate the behaviour of our classifier on the lower grade population. We also asked 5 expert radiologists to annotate BraTS images as low (as opposed to lower) grade or high grade glioma, resulting in a new ground truth. Results Our first training reached 84.1% accuracy. The prediction score histogram allows us to identify the radiologically high grade patients among the original lower grade population of the BraTS dataset. Training another SVM on our new radiologically WHO-aligned ground truth shows robust performance despite an important class imbalance, reaching 82.4% accuracy. Conclusion Our results highlight the coherence of radiologic criteria for low grade versus high grade classification under WHO terms. We also show how the histogram of prediction scores and crossed prediction scores can be used as tools for data exploration and performance evaluation.
Therefore, we propose to use our radiological ground truth for future development on binary glioma grading.
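
The prediction-score-histogram idea can be sketched with a minimal linear SVM trained by sub-gradient descent on synthetic two-class features (the BraTS radiomic features and the authors' exact SVM setup are not reproduced here): scores of the "lower grade" population that fall on the high-grade side of the hyperplane flag radiologically high-grade candidates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for radiomic feature vectors (NOT the BraTS data):
# class -1 = "lower grade", class +1 = "high grade"
X0 = rng.normal(-1.0, 1.0, size=(100, 2))
X1 = rng.normal(+1.0, 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.r_[-np.ones(100), np.ones(100)]

# Minimal linear SVM: Pegasos-style sub-gradient descent on the hinge loss
w, b, lam = np.zeros(2), 0.0, 0.01
for t in range(1, 2001):
    lr = 1.0 / (lam * t)
    viol = y * (X @ w + b) < 1                      # margin violators
    w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y))
    b += lr * y[viol].sum() / len(y)

# Signed distance to the hyperplane over the "lower grade" population;
# positive scores are lower-grade cases that score like high grades.
scores_lower = X0 @ w + b
suspect_high = int((scores_lower > 0).sum())
hist, edges = np.histogram(scores_lower, bins=10)
```

Plotting `hist` against `edges` reproduces the kind of score histogram the paper uses to probe the lower-grade population.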

Journal ArticleDOI
16 Jan 2021-Irbm
TL;DR: A dual-tree complex wavelet transform based filter bank is used to filter the EEG into sub-bands, instead of traditional filtering methods; this improves spatial feature extraction efficiency and suggests that the proposed algorithm can improve the performance of MI-based brain-computer interface devices.
Abstract: Background Frequency band optimization improves the performance of common spatial patterns (CSP) in motor imagery (MI) task classification because MI-related electroencephalograms (EEGs) are highly frequency specific. Many variants of the CSP algorithm divide the EEG into various sub-bands and then apply CSP. However, the feature dimension of MI-EEG data increases with the addition of frequency sub-bands and requires efficient feature selection algorithms. The performance of CSP also depends on the filtering technique. Method In this study, we designed a dual-tree complex wavelet transform based filter bank to filter the EEG into sub-bands, instead of traditional filtering methods, which improved the spatial feature extraction efficiency. Further, after filtering the EEG into different sub-bands, we extracted spatial features from each sub-band using CSP and optimized them by a proposed supervised learning framework based on neighbourhood component analysis (NCA). Subsequently, a support vector machine (SVM) is trained to perform classification. Results An experimental study, conducted on two datasets (BCI Competition IV (Dataset 2b) and BCI Competition III (Dataset IIIa)), validated the MI classification effectiveness of the proposed method in comparison with standard algorithms such as CSP, Filter Bank CSP (FBCSP), and Discriminative FBCSP (DFBCSP). The average classification accuracies obtained by the proposed method for BCI Competition IV (Dataset 2b) and BCI Competition III (Dataset IIIa) are 84.02 ± 12.2 and 89.1 ± 7.50, respectively, and were found to be significantly better than those achieved by the standard methods. Conclusion The superior results achieved suggest that the proposed algorithm can improve the performance of MI-based brain-computer interface devices.
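
The per-sub-band CSP step can be sketched as whitening plus eigendecomposition in plain NumPy; the DTCWT filter bank, NCA selection and SVM stages are omitted, and the trials below are synthetic, not the BCI competition recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

def spatial_cov(trials):
    """Average normalised spatial covariance over trials of shape
    (n_trials, n_channels, n_samples)."""
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

def csp_filters(trials_a, trials_b, n_pairs=2):
    ca, cb = spatial_cov(trials_a), spatial_cov(trials_b)
    d, V = np.linalg.eigh(ca + cb)
    P = V @ np.diag(d ** -0.5) @ V.T       # whiten the composite covariance
    _, B = np.linalg.eigh(P @ ca @ P)      # eigenvalues in ascending order
    W = P @ B
    # the extreme filters discriminate best: take both ends of the spectrum
    return np.hstack([W[:, :n_pairs], W[:, -n_pairs:]])

def csp_features(trial, W):
    z = W.T @ trial                        # spatially filtered signals
    var = z.var(axis=1)
    return np.log(var / var.sum())         # classic log-variance features

# Synthetic 8-channel trials (illustrative only)
trials_a = rng.normal(size=(20, 8, 250))
trials_b = rng.normal(size=(20, 8, 250)) * np.linspace(0.5, 2.0, 8)[None, :, None]

W = csp_filters(trials_a, trials_b)
feat = csp_features(trials_a[0], W)        # 2*n_pairs features per trial
```

In the paper's pipeline one such feature vector is produced per sub-band and the concatenation is then pruned by NCA before the SVM.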

Journal ArticleDOI
01 Oct 2021-Irbm
TL;DR: An adaptive singular spectrum analysis (SSA) algorithm is proposed to remove muscle artifact from single-channel EEG; it is able to discriminate between various contamination levels present in EEG and performs better than the existing single-channel algorithms.
Abstract: Background Electroencephalogram (EEG) signals are obtained from the scalp surface to study various neuro-physiological functions of the brain. Often, these signals are obscured by other physiological signals of the subject from the heart, eyes and facial muscles. Hence, the successive applications of EEG are adversely affected. The wide spectrum and high amplitude variation of muscle artifact overlap EEG in both the spectral and temporal domains. Objective In this paper, an adaptive singular spectrum analysis (SSA) algorithm is proposed to remove muscle artifact from single-channel EEG. The mobility threshold for the SSA routine is decided adaptively using a neural network regressor (NNR). The NNR is trained using the features of contaminated EEG with various levels of contamination for better approximation of the reconstructed EEG signal. Results The proposed algorithm is validated using both simulated and experimental data. Parameters like relative root mean square error (RRMSE), correlation coefficient (Cf), peak signal to noise ratio (PSNR), and mutual information (MI), along with graphical results, are used to evaluate the performance of the proposed algorithm. The proposed algorithm shows consistent and better performance, while the other algorithms show a decline in performance at high levels of contamination. Conclusion The algorithm, upon testing with both simulated and experimental data, is able to discriminate between various contamination levels present in EEG and performs better than the existing single-channel algorithms.
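
The core SSA routine, stripped of the paper's adaptive NNR-chosen mobility threshold, follows four standard steps: embedding, SVD, grouping, and diagonal averaging. A minimal single-channel sketch on a synthetic signal:

```python
import numpy as np

def ssa_reconstruct(x, window, keep):
    """Single-channel SSA sketch; `keep` leading components are retained.
    (The paper selects components adaptively via a neural-network
    regressor, which is not reproduced here.)"""
    n = len(x)
    k = n - window + 1
    # 1) embedding: build the trajectory (Hankel) matrix
    X = np.column_stack([x[i:i + window] for i in range(k)])
    # 2) decomposition: SVD of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3) grouping: retain the `keep` leading components
    Xr = (U[:, :keep] * s[:keep]) @ Vt[:keep]
    # 4) diagonal averaging back to a one-dimensional series
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 10 * t)                  # EEG-like 10 Hz rhythm
noisy = clean + 0.5 * np.random.default_rng(2).normal(size=t.size)
denoised = ssa_reconstruct(noisy, window=50, keep=2)
```

A sinusoid lives in a rank-2 trajectory subspace, so keeping the two leading components recovers it while discarding most of the broadband contamination.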

Journal ArticleDOI
01 Aug 2021-Irbm
TL;DR: A computationally efficient Correlational Neural Network learning model and an automated diagnosis system for detecting Chronic Kidney Disease are proposed; the use of an SVM classifier improves the network's ability to make predictions accurately.
Abstract: Objectives In this paper, we propose a computationally efficient Correlational Neural Network (CorrNN) learning model and an automated diagnosis system for detecting Chronic Kidney Disease (CKD). A Support Vector Machine (SVM) classifier is integrated with the CorrNN model for improving the prediction accuracy. Material and methods The proposed hybrid model is trained and tested with a novel sensing module. We have monitored the concentration of urea in the saliva sample to detect the disease. Experiments are carried out to test the model with real-time samples and to compare its performance with conventional Convolutional Neural Network (CNN) and other traditional data classification methods. Results The proposed method outperforms the conventional methods in terms of computational speed and prediction accuracy. The CorrNN-SVM combined network achieved a prediction accuracy of 98.67%. The experimental evaluations show a reduction in overall computation time of about 9.85% compared to the conventional CNN algorithm. Conclusion The use of the SVM classifier has improved the capability of the network to make predictions more accurately. The proposed framework substantially advances the current methodology, and it provides more precise results compared to other data classification methods.

Journal ArticleDOI
01 Jun 2021-Irbm
TL;DR: An automatic system is introduced to classify helitron families in the C. elegans genome, based on a combination of machine learning approaches and features extracted from DNA sequences, in particular Frequency Chaos Game Representation DNA images.
Abstract: Helitrons, eukaryotic transposable elements (TEs) transposed by a rolling-circle mechanism, have been found in various species with highly variable copy numbers, sometimes constituting a large portion of their genomes. Helitron sequences frequently capture host genes during their transposition. Since their discovery 18 years ago, by computational analysis of the whole genome sequences of the Arabidopsis thaliana plant and the Caenorhabditis elegans (C. elegans) nematode, the identification and classification of these mobile genetic elements remain a challenge because the wide majority of their families are non-autonomous. In the C. elegans genome, helitron DNA sequences vary greatly in length, from 11 to 8965 base pairs (bp) from one sequence to another. In this work, we develop a new method to predict helitron DNA sequences, based in particular on Frequency Chaos Game Representation (FCGR) DNA images. We thus introduce an automatic system to classify helitron families in the C. elegans genome, based on a combination of machine learning approaches and features extracted from DNA sequences. The new set of helitron features (FCGR images and K-mers) is extracted from the DNA sequences; the K-mer features count the occurrences of length-K nucleotide substrings in each sequence. Three different classifiers are used for the classification of all existing helitron families. The results show a global score of 72.7% using the FCGR images as helitron features with a pre-trained neural network as classifier. The two other classifiers reach 68.7% for the Support Vector Machine (SVM) and 91.45% for the Random Forest (RF) algorithm using the K-mer features of the genomic sequences.
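
An FCGR image of the kind used as a helitron feature can be computed by running the chaos game over the sequence and incrementing, for each k-mer, the pixel its point lands in. The corner assignment below is one common convention and the parameters are illustrative; the paper's exact settings are not specified here.

```python
import numpy as np

# One common corner assignment for the chaos game over the unit square
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def fcgr(sequence, k=4):
    """Frequency Chaos Game Representation of a DNA sequence as a
    2^k x 2^k image: every k-mer maps to a unique pixel whose value
    counts that k-mer's occurrences."""
    size = 2 ** k
    img = np.zeros((size, size))
    x = y = 0.5                                  # start at the square's centre
    for i, base in enumerate(sequence.upper()):
        cx, cy = CORNERS[base]                   # assumes a pure ACGT sequence
        x, y = (x + cx) / 2.0, (y + cy) / 2.0    # chaos-game step
        if i >= k - 1:                           # the point now encodes a k-mer
            img[int(y * size), int(x * size)] += 1
    return img

# Toy periodic sequence: 200 bases, only 4 distinct 4-mers occur
img = fcgr("ACGT" * 50, k=4)
```

After reading k bases, the chaos-game point lies in a cell of side 2^-k uniquely determined by the last k bases, which is why the image doubles as a k-mer frequency table.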

Journal ArticleDOI
18 Jun 2021-Irbm
TL;DR: The proposed smartphone app provides a cost-effective and widely accessible mobile platform for early screening of glaucoma in remote clinics or areas with limited access to fundus cameras and ophthalmologists.
Abstract: Glaucoma is an eye disease that causes blindness when it progresses to an advanced stage. Early glaucoma diagnosis is essential to prevent vision loss. However, early detection is often not achieved due to the lack of ophthalmologists and the limited accessibility of retinal image capture devices. In this paper, we present an automated method for glaucoma screening dedicated to Smartphone Captured Fundus Images (SCFIs). Implementing the method on a smartphone associated with an optical lens for retina capture leads to a mobile aided screening system for glaucoma. The challenge consists in ensuring high detection performance despite the moderate quality of SCFIs, with an execution time short enough for clinical use. The main idea is to deduce glaucoma from the vessel displacement inside the Optic Disk (OD), where the vessel tree remains sufficiently well modeled on SCFIs. Toward this objective, our major contribution consists in proposing: (1) a robust processing pipeline for locating vessel centroids in order to adequately model the vessel distribution, and (2) a feature vector that relevantly reflects two main glaucoma biomarkers in terms of vessel displacement. Furthermore, all processing steps are chosen for low complexity, to be suitable for fast clinical screening. A first evaluation of our method is performed using the two public DRISHTI-DB and DRIONS-DB databases, where 99% and 95% accuracy, 96.77% and 97.5% specificity, and 100% and 95% sensitivity are respectively achieved. Thereafter, the method is evaluated using two fundus image databases captured through a smartphone and a retinograph, respectively, for the same persons. We achieve 100% accuracy on both databases, which attests to the robustness of our method. In addition, detection is performed in 0.027 and 0.029 seconds when executed on the Samsung-M51 and the Samsung-A70 smartphone devices, respectively.
Our proposed smartphone app provides a cost-effective and widely accessible mobile platform for early screening of glaucoma in remote clinics or areas with limited access to fundus cameras and ophthalmologists.

Journal ArticleDOI
09 Jul 2021-Irbm
TL;DR: In this paper, EEG signals were acquired from 14 subjects in two test conditions, for 12 minutes before examination and 3 minutes after examination, using eight electrodes placed with a wireless Enobio device (Neuroelectrics) following the 10-20 international lead system.
Abstract: Background Adolescence is a crucial chapter in life, and the presence of stress, depression, and anxiety at this stage is a great concern. Prolonged stress is one of the risk factors that may induce suicidal thoughts, destructive ideation, and abuse of alcohol and drugs in adulthood. According to records from the National Crime Records Bureau, over 2320 children in India committed suicide per year because of failure in examinations. This high number indicates the severity of the issue and its major impact on society. Objectives The main objective of this paper is to analyze cognitive stress in students during the examination period using EEG biomarkers. Methods and Results EEG signals were acquired from 14 subjects in two test conditions, for 12 minutes before examination and 3 minutes after examination, using eight electrodes placed with a wireless Enobio device (Neuroelectrics) following the 10-20 international lead system. The relative band energies of the theta, alpha, and beta brain waves were considered, and EEG band ratios (heart rate, neural activity, arousal index, vigilance index, and the cognitive performance attentional resource index) were extracted for the before- and after-examination conditions using the db4 wavelet family with 6-level decomposition. The statistical results suggest that after examination the relative sub-band energies α, β, and θ were decreased significantly (p Conclusion The experimental results found that memory and concentration were high before examination, which indicates that examination stress in the adolescent group was higher before the examination than after it. In the gender group comparison, the theta energy band for male students was found to be higher than for female students before the examination, indicating that male students were more stressed (before examination) than female students.
Overall, our results suggest that after the examination male students showed a lower heart rate index than female students, which implies that male students control their stress levels better than females in the same stress situation.
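
Relative band energies of the kind compared before and after the examination can be sketched with a simple FFT periodogram; this is a stand-in for the paper's 6-level db4 wavelet decomposition, and the signal below is synthetic, not recorded EEG.

```python
import numpy as np

def relative_band_energy(signal, fs, bands):
    """Relative spectral energy per EEG band, normalised over the
    union of the given bands (here 4-30 Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power[(freqs >= 4) & (freqs < 30)].sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

fs = 256
t = np.arange(0, 4, 1.0 / fs)
# Synthetic signal dominated by a 10 Hz alpha rhythm plus a weak beta tone
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
rel = relative_band_energy(eeg, fs, bands)
```

The three relative energies sum to one by construction, which is what makes before/after comparisons of individual bands meaningful.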

Journal ArticleDOI
01 Oct 2021-Irbm
TL;DR: A modified fuzzy Q-learning algorithm in conjunction with wavelet-based pre-processing has been used to build a classifier for identifying pneumonia and tuberculosis severity using a repository of X-ray images.
Abstract: This work proposes reinforcement learning for correctly identifying pneumonia and tuberculosis (TB) using a repository of X-ray images. To our knowledge, this is a first attempt at employing reinforcement learning for pneumonia and TB classification. In particular, a modified fuzzy Q-learning (MFQL) algorithm in conjunction with wavelet-based pre-processing has been used to build a classifier for identifying pneumonia and tuberculosis severity. The proposed classifier is self-learning and uses pneumonia dataset samples (no pneumonia, mild pneumonia and severe pneumonia) and tuberculosis dataset samples (TB present, TB absent) to classify X-ray images of subjects. Results indicate that the MFQL-based approach achieves high accuracy and fares much better than contemporary Support Vector Machine (SVM) and k-Nearest Neighbor (KNN) classifiers. The proposed classifier can be a useful tool for pneumonia and tuberculosis diagnosis in a practical setting.
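
The Q-value update that MFQL builds on is the standard temporal-difference rule of Q-learning; the sketch below uses a toy 3-state environment (not X-ray data) and plain, non-fuzzy Q-learning, so it only illustrates the underlying update, not the paper's modified fuzzy variant.

```python
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Hypothetical dynamics: action 0 stays put, action 1 advances;
    reward 1 only on reaching the terminal state."""
    nxt = min(state + action, n_states - 1)
    return nxt, 1.0 if nxt == n_states - 1 else 0.0

for _ in range(500):                               # training episodes
    s = 0
    while s != n_states - 1:
        if rng.random() < epsilon:                 # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)                          # greedy policy per state
```

In fuzzy Q-learning, the discrete state index would be replaced by fuzzy membership degrees over rule antecedents, with the same TD error driving the updates.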

Journal ArticleDOI
01 Aug 2021-Irbm
TL;DR: The proposed approach for retrieving similar biomedical images based on Zernike moment features, curvelet features and histogram of oriented gradients (HOG) features achieved a better retrieval rate on all four databases.
Abstract: Biomedical image retrieval is a crucial component of computer-aided diagnosis. It helps radiologists and medical specialists to spot and understand a specific disease. This paper proposes an efficient approach for retrieving similar biomedical images based on Zernike moment features, curvelet features, and histogram of oriented gradients (HOG) features. The Zernike moments, defined over a set of Zernike polynomials, form a global descriptor capable of extracting both texture and shape information with minimal redundancy. The curvelet transform is used to compute edge-based shape information, in the form of a curvelet histogram, for curves with discontinuities, and the HOG features capture the occurrences of gradient orientations in local areas of an image. The experiments were conducted on four benchmark biomedical image databases: the HRCT dataset, the Emphysema CT database, the OASIS MRI database, and the NEMA MRI database. The performance of the proposed approach was compared with many existing methods and achieved a better retrieval rate on all four databases.
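
The HOG component of the feature vector can be illustrated with a single-cell sketch: compute image gradients, accumulate their magnitudes into unsigned-orientation bins, and L2-normalize. Real HOG tiles the image into cells and blocks; the bin count and patch below are illustrative.

```python
import numpy as np

def hog_cell_histogram(image, n_bins=9):
    """Histogram of oriented gradients over one image patch
    (a one-cell sketch of the HOG descriptor)."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]      # central differences, y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation [0, 180)
    bin_width = 180.0 / n_bins
    idx = np.minimum((ang // bin_width).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, idx.ravel(), mag.ravel())   # magnitude-weighted voting
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Vertical-edge patch: all gradient energy is horizontal (0-degree bin)
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0
h = hog_cell_histogram(patch)
```

For this synthetic vertical edge every non-zero gradient points along 0 degrees, so the entire normalized histogram mass lands in the first bin.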

Journal ArticleDOI
16 Jun 2021-Irbm
TL;DR: A methodological review of different ECG data compression techniques based on their experimental performance on ECG records of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database and includes different validation methods of ECG compression techniques.
Abstract: Objective: Globally, cardiovascular diseases (CVDs) are among the leading causes of death. In medical screening and diagnostic procedures for CVDs, electrocardiogram (ECG) signals are widely used. Early detection of CVDs requires the acquisition of longer ECG signals. This has triggered the development of personal healthcare systems which can be used by cardiac patients to manage the disease. These healthcare systems continuously record, store, and transmit ECG data via wired/wireless communication channels. There are many issues with these systems, such as limited data storage, limited bandwidth, and limited battery life. ECG data compression techniques can resolve these issues. Method: In the past, numerous ECG data compression techniques have been proposed. This paper presents a methodological review of different ECG data compression techniques based on their experimental performance on ECG records of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Results: It is observed that the experimental performance of different compression techniques depends on several parameters. The existing compression techniques are validated using different distortion measures. Conclusion: This study elaborates the advantages and disadvantages of different ECG data compression techniques. It also covers different validation methods for ECG compression techniques. Although compression techniques have been developed widely, the validation of compression methods is still a prospective research area for accomplishing efficient and reliable performance.
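
Two of the standard validation measures used across such reviews, compression ratio (CR) and percentage root-mean-square difference (PRD), can be sketched on a synthetic beat with a naive downsample-and-interpolate "codec". The codec and the 11-bit sample assumption are purely illustrative, not one of the reviewed techniques.

```python
import numpy as np

def compression_ratio(n_original_bits, n_compressed_bits):
    """CR: original size divided by compressed size."""
    return n_original_bits / n_compressed_bits

def prd(original, reconstructed):
    """Percentage root-mean-square difference, a standard distortion
    measure for validating ECG compression."""
    original = np.asarray(original, float)
    reconstructed = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

# Synthetic one-second "beat" sampled at 360 points (as in MIT-BIH records)
t = np.linspace(0, 1, 360)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)

kept = ecg[::8]                                  # keep every 8th sample
recon = np.interp(t, t[::8], kept)               # linear reconstruction

cr = compression_ratio(ecg.size * 11, kept.size * 11)  # assume 11-bit samples
err = prd(ecg, recon)
```

Real evaluations trade CR against PRD: the 8x decimation here buys a CR of 8 at the cost of losing the higher-frequency component.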

Journal ArticleDOI
27 Jul 2021-Irbm
TL;DR: A systematic review is provided on the BCI speller system and it includes speller paradigms, feature extraction, feature optimization and classification techniques for BCIspeller.
Abstract: A brain-computer interface (BCI) speller is a system that provides an alternative means of communication for disabled people. The brain wave is translated into a machine command through a BCI speller, which can be used as a communication medium for patients to express their thoughts without any motor movement. A BCI speller aims to spell characters by using the electroencephalogram (EEG) signal. Several types of BCI spellers are available based on the EEG signal. A standard BCI speller system consists of the following elements: the BCI speller paradigm, the data acquisition system, and signal processing algorithms. In this work, a systematic review of the BCI speller system is provided, covering speller paradigms, feature extraction, feature optimization and classification techniques for BCI spellers. The advantages and limitations of different speller paradigms and machine learning algorithms are discussed in this article. Future research directions that can overcome the limitations of present state-of-the-art techniques for BCI spellers are also discussed.

Journal ArticleDOI
15 Mar 2021-Irbm
TL;DR: A comparative analysis among different feature extraction techniques and classification algorithms for MI-based EEG signals indicates a significant improvement in classification accuracy of the proposed methods with respect to existing ones.
Abstract: Objective The initial principal task of Brain-Computer Interfacing (BCI) research is to extract the best feature set from a raw EEG (electroencephalogram) signal so that it can be used for the classification of two or more different events. The main goal of this paper is to develop a comparative analysis of different feature extraction techniques and classification algorithms. Materials and methods In the present investigation, four different methodologies have been adopted to classify the recorded MI (motor imagery) EEG signals, and their comparative study has been reported. Haar Wavelet Energy (HWE), Band Power, Cross-correlation, and Spectral Entropy (SE) based Cross-correlation feature extraction techniques have been considered to obtain the necessary feature sets from the raw EEG signals. Four different machine learning algorithms, viz. LDA (Linear Discriminant Analysis), QDA (Quadratic Discriminant Analysis), Naive Bayes, and Decision Tree, have been used to classify the features. Results The best average classification accuracies using the four methods are 92.50%, 93.12%, 72.26%, and 98.71%, respectively. Further, these results have been compared with some recent existing methods. Conclusion The comparative results indicate a significant improvement in classification accuracy of the proposed methods with respect to the existing ones. Hence, the presented work can guide the selection of the best feature extraction method and classifier algorithm for MI-based EEG signals.
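
One of the compared feature families, spectral entropy, reduces to the Shannon entropy of the power spectrum treated as a probability distribution. The sketch below shows only this feature on synthetic signals; the paper's SE-based cross-correlation pipeline is more involved.

```python
import numpy as np

def spectral_entropy(signal):
    """Normalised spectral entropy: Shannon entropy of the normalised
    power spectrum, divided by its maximum possible value."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()                    # spectrum as a distribution
    p = p[p > 0]                               # avoid log(0)
    return -np.sum(p * np.log2(p)) / np.log2(len(power))

rng = np.random.default_rng(4)
t = np.arange(0, 2, 1 / 128.0)
tone = np.sin(2 * np.pi * 12 * t)              # narrowband: low entropy
noise = rng.normal(size=t.size)                # broadband: high entropy

se_tone = spectral_entropy(tone)
se_noise = spectral_entropy(noise)
```

A concentrated spectrum (a rhythm) yields entropy near 0 and a flat spectrum yields entropy near 1, which is what makes the measure useful for discriminating MI states.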

Journal ArticleDOI
01 Jun 2021-Irbm
TL;DR: A novel fuzzy methodology IFFP (Improved Fuzzy Frequent Pattern Mining), based on a fuzzy association rule mining for biological knowledge extraction, is introduced to analyze the dataset in order to find the core factors that cause breast cancer.
Abstract: Background: Breast cancer, a type of malignant tumor, affects women more than men. About one third of women with breast cancer die of this disease. Hence, it is imperative to find a tool for the proper identification and early treatment of breast cancer. Unlike conventional data mining algorithms, fuzzy logic based approaches help in mining association rules from quantitative transactions. Methods: In this study a novel fuzzy methodology, IFFP (Improved Fuzzy Frequent Pattern Mining), based on fuzzy association rule mining for biological knowledge extraction, is introduced to analyze the dataset in order to find the core factors that cause breast cancer. The method consists of two phases. During the first phase, fuzzy frequent itemsets are mined using the proposed IFFP algorithm. Fuzzy association rules are formed during the second phase, indicating whether a person belongs to the benign or the malignant class. The algorithm is applied to the WBCD (Wisconsin Breast Cancer Database) to detect the presence of breast cancer. Results: It is determined that the factor Mitoses has a low range of values for both malignant and benign cases and hence does not contribute to the detection of breast cancer. On the other hand, a high range of Bare Nuclei indicates a greater chance of the presence of breast cancer. Conclusion: Experimental evaluations on real datasets show that our proposed method outperforms recently proposed state-of-the-art algorithms in terms of runtime and memory usage.
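
The first-phase fuzzy frequent itemset mining rests on fuzzy memberships and fuzzy support. A minimal sketch with a hypothetical "High" membership function and invented records (not the WBCD data) shows why a uniformly low-valued attribute such as Mitoses contributes near-zero support, as the abstract reports.

```python
import numpy as np

# Hypothetical records on a 1-10 scale (NOT the actual WBCD data):
# columns = Bare Nuclei, Mitoses
records = np.array([[9, 1], [8, 2], [2, 1], [10, 1], [3, 1]], dtype=float)

def high_membership(x, lo=5.0, hi=10.0):
    """Fuzzy membership in the linguistic term 'High' for a 1-10
    attribute: 0 below `lo`, rising linearly to 1 at `hi`."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

mu_bn = high_membership(records[:, 0])           # Bare Nuclei memberships
mu_mt = high_membership(records[:, 1])           # Mitoses memberships

# Fuzzy support: per record take the minimum membership over the itemset,
# then average over all records.
support_bn = mu_bn.mean()                        # {BareNuclei = High}
support_pair = np.minimum(mu_bn, mu_mt).mean()   # {BareNuclei = High, Mitoses = High}
```

Because every Mitoses value sits below the "High" ramp, any itemset containing it has zero fuzzy support and is pruned, while Bare Nuclei alone remains frequent.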