
Showing papers in "Journal of Healthcare Engineering in 2022"


Journal ArticleDOI
TL;DR: This study proposes a unique intelligent diabetes mellitus prediction framework (IDMPF) developed using machine learning after a rigorous review of existing prediction models in the literature and an examination of their applicability to diabetes.
Abstract: Diabetes is a chronic disease that remains a significant global concern since it affects the health of the entire population. It is a metabolic disorder that leads to high blood sugar levels and many other problems such as stroke, kidney failure, and heart and nerve damage. Several researchers have attempted to construct an accurate diabetes prediction model over the years. However, the subject still faces significant open research issues due to a lack of appropriate data sets and prediction approaches, which pushes researchers to use big data analytics and machine learning (ML)-based methods. Applying four different machine learning methods, the research tries to overcome these problems and investigate healthcare predictive analytics. The study's primary goal was to see how big data analytics and machine learning-based techniques may be used for diabetes. The examination of the results shows that the suggested ML-based framework may achieve a score of 86. Health experts and other stakeholders are working to develop categorization models that will aid in the prediction of diabetes and the formulation of preventative initiatives. The authors review the literature on machine learning models and suggest an intelligent framework for diabetes prediction based on their findings. Machine learning models are critically examined, and an intelligent machine learning-based architecture for diabetes prediction is proposed and evaluated. In this study, the authors use their framework to develop and assess decision tree (DT)-based random forest (RF) and support vector machine (SVM) learning models for diabetes prediction, the most widely used techniques in the literature at the time of writing. The study proposes a unique intelligent diabetes mellitus prediction framework (IDMPF) developed using machine learning; the framework was designed after a rigorous review of existing prediction models in the literature and an examination of their applicability to diabetes. Using the framework, the authors describe the training procedures, model assessment strategies, and issues associated with diabetes prediction, as well as the solutions they provide. The findings of this study may be utilized by health professionals, stakeholders, students, and researchers who are involved in diabetes prediction research and development. The proposed work achieves 83% accuracy with a minimal error rate.
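
The abstract names random forest and SVM as the core learners of the framework. As a minimal, hedged sketch (not the authors' IDMPF pipeline), the following scikit-learn code shows how the two model families could be trained and compared on a generic tabular diabetes dataset; the CSV file name and the "Outcome" label column are hypothetical.

```python
# Minimal sketch (not the authors' IDMPF pipeline): training the two model
# families named in the abstract -- a random forest and an SVM -- on a generic
# tabular diabetes dataset. The file name and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")              # hypothetical dataset file
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)        # SVMs are scale-sensitive
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "svm_rbf": SVC(kernel="rbf", C=1.0),
}
for name, model in models.items():
    # the SVM uses the scaled features; the forest can use the raw table
    model.fit(X_train_s if name == "svm_rbf" else X_train, y_train)
    preds = model.predict(X_test_s if name == "svm_rbf" else X_test)
    print(name, "accuracy:", accuracy_score(y_test, preds))
```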

74 citations


Journal ArticleDOI
TL;DR: A new image compression scheme, the GenPSOWVQ method, uses a recurrent neural network with wavelet vector quantization (VQ) and attains precise compression while maintaining image accuracy at lower computational cost when encoding clinical images.
Abstract: Medical diagnosis is a time-sensitive path to proper medical treatment. Automation systems have been developed to address these issues. In the process of automation, images are processed and sent to a remote system for processing and decision making. Images are therefore compressed to reduce processing and computational costs, since they require large storage and transmission resources. A good image compression strategy can help minimize these requirements, but trading compression against accuracy is always a challenge, and to optimize imaging it is necessary to reduce inconsistencies in medical imaging. This paper therefore introduces a new image compression scheme called the GenPSOWVQ method that uses a recurrent neural network with wavelet VQ. The codebook is built using a combination of fragments and genetic algorithms. The newly developed image compression model attains precise compression while maintaining image accuracy with lower computational costs when encoding clinical images. The proposed method was tested on real-time medical imaging using PSNR, MSE, SSIM, NMSE, SNR, and CR indicators. Experimental results show that the proposed GenPSOWVQ method yields higher PSNR and SSIM values for a given compression ratio than the existing methods. In addition, the proposed GenPSOWVQ method yields lower MSE, RMSE, and SNR values for a given compression ratio than the existing methods.
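
For readers who want to reproduce the quality indicators named in the abstract, the sketch below computes MSE, RMSE, PSNR, and SSIM for an original/reconstructed image pair with NumPy and scikit-image; it is an evaluation helper only, not the GenPSOWVQ codec itself.

```python
# Sketch of the evaluation metrics mentioned in the abstract (MSE, RMSE, PSNR,
# SSIM), computed with NumPy and scikit-image; not the GenPSOWVQ codec itself.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compression_metrics(original, reconstructed):
    """original/reconstructed: 2-D uint8 arrays of the same shape."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=255)
    ssim = structural_similarity(original, reconstructed, data_range=255)
    return {"MSE": mse, "RMSE": np.sqrt(mse), "PSNR": psnr, "SSIM": ssim}
```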

65 citations


Journal ArticleDOI
TL;DR: The IntOPMICM technique is introduced, a new image compression scheme that combines GenPSO and VQ and produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data.
Abstract: Due to the increasing number of medical images being utilized for the diagnosis and treatment of diseases, lossy or improper image compression has become more prevalent in recent years. The compression ratio and image quality, commonly quantified by PSNR values, are used to evaluate the performance of a lossy compression algorithm. This article introduces the IntOPMICM technique, a new image compression scheme that combines GenPSO and VQ. A combination of fragments and genetic algorithms was used to create the codebook. PSNR, MSE, SSIM, NMSE, SNR, and CR indicators were used to test the suggested technique on real-time medical imaging. The suggested IntOPMICM approach produces higher PSNR and SSIM values for a given compression ratio than existing methods, according to experimental data. Furthermore, for a given compression ratio, the suggested IntOPMICM approach produces lower MSE, RMSE, and SNR values than existing methods.

65 citations


Journal ArticleDOI
TL;DR: A segmentation and detection method for brain tumors was developed using images from the MRI sequence as input to identify the tumor area, with significant features extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) method.
Abstract: Radiology is a broad subject that requires deep knowledge and understanding of medical science to identify tumors accurately. A tumor detection program thus helps to overcome the lack of qualified radiologists. Using magnetic resonance imaging, biomedical image processing makes it easier to detect and locate brain tumors. In this study, a segmentation and detection method for brain tumors was developed using images from the MRI sequence as input to identify the tumor area. This process is difficult due to the wide variety of tumor tissues across patients, and, in most cases, their similarity to normal tissues makes the task harder. The main goal is to classify the brain as containing a tumor or being healthy. The proposed system is based on the Berkeley wavelet transform (BWT) and a deep learning classifier to improve performance and simplify the process of medical image segmentation. Significant features are extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) method, followed by feature optimization using a genetic algorithm. The final results of the implemented approach were assessed based on accuracy, sensitivity, specificity, Dice coefficient, Jaccard coefficient, spatial overlap, AVME, and FoM.
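
The GLCM feature-extraction step described in the abstract can be sketched with scikit-image as follows (naming per scikit-image 0.19+); the segmented tumour region is assumed to be an 8-bit grayscale array, and the genetic-algorithm feature optimization is not shown.

```python
# Minimal GLCM texture-feature sketch for a segmented tumour ROI (8-bit
# grayscale array). The GA-based feature optimization step is not reproduced.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    # co-occurrence matrix at distance 1 over four directions
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # average each texture property over the four directions
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```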

55 citations


Journal ArticleDOI
TL;DR: This paper summarizes the medical image segmentation technologies based on the U-Net structure variants concerning their structure, innovation, efficiency, etc.
Abstract: Deep learning has been extensively applied to segmentation in medical imaging. U-Net, proposed in 2015, shows the advantages of accurate segmentation of small targets and a scalable network architecture. With the increasing requirements for segmentation performance in medical imaging in recent years, U-Net has been cited academically more than 2500 times. Many scholars have been constantly developing the U-Net architecture. This paper summarizes the medical image segmentation technologies based on U-Net structure variants concerning their structure, innovation, efficiency, etc.; reviews and categorizes the related methodology; and introduces the loss functions, evaluation parameters, and modules commonly applied to segmentation in medical imaging, which will provide a good reference for future research.
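
As a point of reference for the surveyed variants, the following Keras sketch shows a deliberately tiny U-Net-style encoder-decoder with the characteristic skip connections; the layer widths and input size are illustrative, not taken from any specific paper in the review.

```python
# Tiny U-Net-style encoder-decoder in Keras, illustrating the skip connections
# that the surveyed variants build on; sizes are illustrative only.
from tensorflow.keras import layers, Model

def tiny_unet(input_shape=(128, 128, 1), n_classes=1):
    inp = layers.Input(input_shape)
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p2)      # bottleneck
    u2 = layers.concatenate([layers.UpSampling2D()(b), c2])              # skip connection
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(u2)
    u1 = layers.concatenate([layers.UpSampling2D()(c3), c1])             # skip connection
    c4 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    out = layers.Conv2D(n_classes, 1, activation="sigmoid")(c4)          # per-pixel mask
    return Model(inp, out)
```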

54 citations


Journal ArticleDOI
TL;DR: This work presents a review of the literature in the field of medical image segmentation employing deep convolutional neural networks, and examines the various widely used medical image datasets, the different metrics used for evaluating the segmentation tasks, and performances of different CNN based networks.
Abstract: Image segmentation is a branch of digital image processing with numerous applications in image analysis, augmented reality, machine vision, and many more. The field of medical image analysis is growing, and the segmentation of organs, diseases, or abnormalities in medical images has become demanding. The segmentation of medical images helps in monitoring the growth of diseases such as tumours, controlling the dosage of medicine, and controlling radiation exposure. Medical image segmentation is a challenging task due to the various artefacts present in the images. Recently, deep neural models have been applied to various image segmentation tasks. This significant growth is due to the achievements and high performance of deep learning strategies. This work presents a review of the literature in the field of medical image segmentation employing deep convolutional neural networks. The paper examines the widely used medical image datasets, the different metrics used for evaluating segmentation tasks, and the performance of different CNN-based networks. In comparison to existing review and survey papers, the present work also discusses the various challenges in the field of medical image segmentation and the state-of-the-art solutions available in the literature.
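
Two of the evaluation metrics discussed in such reviews, the Dice coefficient and IoU (Jaccard index), can be computed for binary masks with a few lines of NumPy; this is a generic sketch, not code from the reviewed papers.

```python
# Generic sketch of two segmentation metrics commonly reported in the surveyed
# papers: the Dice coefficient and IoU (Jaccard index) for binary masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```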

50 citations


Journal ArticleDOI
TL;DR: This work presents a comparative performance analysis of transfer learning-based pretrained CNN models, VGG-16, ResNet-50, and Inception-v3, for automatic prediction of tumor cells in the brain, and estimates that the pretrained VGG-16 model delivers highly adequate results with an increase in training and validation accuracy.
Abstract: Brain tumor classification is a very important and the most prominent step for assessing life-threatening abnormal tissues and providing efficient treatment for patient recovery. To identify pathological conditions in the brain, there exist various medical imaging technologies. Magnetic Resonance Imaging (MRI) is extensively used in medical imaging due to its excellent image quality and independence from ionizing radiation. The significance of deep learning, a subset of artificial intelligence, in the area of medical diagnosis applications has paved the way for rapid developments in brain tumor detection from MRI with higher prediction rates. For brain tumor analysis and classification, the convolutional neural network (CNN) is the most extensively and widely used deep learning algorithm. In this work, we present a comparative performance analysis of transfer learning-based pretrained CNN models, VGG-16, ResNet-50, and Inception-v3, for automatic prediction of tumor cells in the brain. The pretrained models are demonstrated on an MRI brain tumor image dataset consisting of 233 images. Our paper aims to locate brain tumors by utilizing the VGG-16 pretrained CNN model. The performance of our model is evaluated on accuracy. As an outcome, we estimate that the pretrained VGG-16 model delivers highly adequate results with an increase in training and validation accuracy.
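
A minimal Keras sketch of the VGG-16 transfer-learning setup described in the abstract is shown below, assuming images resized to 224x224 and a binary tumour/no-tumour label; the classifier head, optimizer, and training schedule are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of VGG-16 transfer learning for binary brain-tumour
# classification; head layers and hyperparameters are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, Model

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                              # freeze the pretrained convolutional base

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)      # tumour vs. no tumour

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # dataset objects assumed
```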

47 citations


Journal ArticleDOI
TL;DR: It has been shown that the proposed mRMRe-GA approach enhances classification accuracy while employing fewer genes than previous methods.
Abstract: In microarray gene expression data, a large number of genes are expressed at varying levels. Given that only a few genes are critically significant, it is challenging to analyze and categorize datasets that span the whole gene space. The discovery of biomarker genes is essential to aid the diagnosis of cancer and, as a consequence, the suggestion of individualized treatment. Starting from a large pool of candidates, the parallelized minimal redundancy and maximum relevance ensemble (mRMRe) is used to choose the top m informative genes. Once the genes have been identified, they are input into a Genetic Algorithm (GA), which heuristically computes the ideal set of genes by applying the Mahalanobis Distance (MD) as a distance metric. The proposed approach (mRMRe-GA) is applied to four microarray datasets, with the Support Vector Machine (SVM) serving as the classifier. Leave-One-Out Cross-Validation (LOOCV) is used as the cross-validation technique for assessing the performance of the classifier. The proposed mRMRe-GA strategy is then compared with other approaches. It has been shown that the proposed mRMRe-GA approach enhances classification accuracy while employing fewer genes than previous methods. Keywords: microarray, gene expression data, GA, feature selection, SVM, cancer classification.
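
The leave-one-out cross-validation protocol named in the abstract can be sketched with scikit-learn as follows; X is assumed to be the matrix of samples over the already-selected genes and y the class labels, and the mRMRe/GA selection step itself is not reproduced.

```python
# Sketch of the LOOCV evaluation protocol named in the abstract: an SVM is
# scored with leave-one-out cross-validation on an already-reduced gene subset.
# X (samples x selected genes) and y (class labels) are assumed inputs.
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def loocv_accuracy(X, y):
    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())   # one fold per sample
    return scores.mean()
```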

37 citations


Journal ArticleDOI
TL;DR: A review of the different areas of the recent machine learning research for healthcare wearable devices is presented, and different challenges facing machine learning applications on wearable devices are discussed.
Abstract: Using artificial intelligence and machine learning techniques in healthcare applications has been actively researched over the last few years. It holds promising opportunities, as it is used to track human activities and vital signs using wearable devices and to assist in disease diagnosis, and it can play a great role in elderly care and in patients' health monitoring and diagnostics. With the great technological advances in medical sensors and the miniaturization of electronic chips in the last five years, more applications are being researched and developed for wearable devices. Despite the remarkable growth in the use of smart watches and other wearable devices, few of these massive research efforts for machine learning applications have found their way to market. In this study, a review of the different areas of recent machine learning research for healthcare wearable devices is presented. Different challenges facing machine learning applications on wearable devices are discussed. Potential solutions from the literature are presented, and areas open for improvement and further research are highlighted.

37 citations


Journal ArticleDOI
TL;DR: The pathophysiology and major roles of insulin resistance and excessive levels of androgen in females with PCOS are focused on.
Abstract: The polycystic ovary syndrome (PCOS) is a disease featured by elevated levels of androgens, ovulatory dysfunction, and morphological abnormalities. At the reproductive stage of women, the rate of PCOS occurrence is measured as 6–10%, and the prevalence rate may be double that. There are different pathophysiological factors involved in PCOS, and they play a major role in various abnormalities in individual patients. It is clear that there is a noteworthy elevation of androgen in PCOS, causing substantial misery and infertility problems. Overexposure to androgen is directly linked with insulin resistance and hyperinsulinaemia. It has been reported previously that PCOS is related to cardiometabolic problems and potently increases the risk of heart disease. Endometrial cancer is also a serious concern, reported with exceedingly high incidence in women with PCOS. However, overexposure to androgen has a direct and specific influence on the development of insulin resistance. Although many factors are involved, resistance to insulin and an enhanced level of androgen are considered the major causes of PCOS. In the present review, we have focused on the pathophysiology and major roles of insulin resistance and excessive levels of androgen in females with PCOS.

34 citations


Journal ArticleDOI
TL;DR: A reliable approach for diagnosing skin cancer utilizing dermoscopy images in order to improve health care professionals' visual perception and diagnostic abilities to discriminate benign from malignant lesions is presented.
Abstract: Skin cancer is one of the most common diseases and can be initially detected by visual observation and further with the help of dermoscopic analysis and other tests. Because visual observation at an initial stage gives the opportunity to utilize artificial intelligence to interpret different skin images, several skin lesion classification methods using deep learning based on convolutional neural networks (CNNs) and annotated skin photos exhibit improved results. In this respect, the paper presents a reliable approach for diagnosing skin cancer utilizing dermoscopy images in order to improve health care professionals' visual perception and diagnostic ability to discriminate benign from malignant lesions. Swarm intelligence (SI) algorithms were used for skin lesion region of interest (RoI) segmentation from dermoscopy images, and speeded-up robust features (SURF) were used for feature extraction on the RoI marked as the best segmentation result, obtained using the Grasshopper Optimization Algorithm (GOA). The skin lesions are classified into two groups using a CNN on three data sets, namely, the ISIC-2017, ISIC-2018, and PH-2 data sets. The proposed segmentation and classification techniques' results are assessed in terms of classification accuracy, sensitivity, specificity, F-measure, precision, MCC, Dice coefficient, and Jaccard index, with an average classification accuracy of 98.42 percent, precision of 97.73 percent, and MCC of 0.9704. In every performance measure, the suggested strategy exceeds previous work.

Journal ArticleDOI
TL;DR: XGBoost is used to test alternative decision tree classification algorithms in the hopes of improving the accuracy of heart disease diagnosis, and four types of machine learning (ML) models are compared.
Abstract: At present, heart failure, a multifaceted clinical disease, affects a growing number of people in the world. In the early stages, cardiac centers and hospitals rely heavily on ECG to evaluate and diagnose heart failure; the ECG can be considered a standard tool. Early detection of heart disease is a critical concern in healthcare services (HCS). This paper presents a brief analysis of different machine learning technologies for heart disease detection. The first method uses Naïve Bayes with a weighted approach to predict heart disease. The second, based on frequency-domain, time-domain, and information-theoretic features, performs automatic localization and detection of ischemic heart disease; in this method, the two best-performing classifiers, support vector machine (SVM) and XGBoost, are selected for classification. The third, an automatic heart failure identification method using an improved SVM based on a duality optimization scheme, is also analyzed. Finally, for a clinical decision support system (CDSS), an effective heart disease prediction model (HDPM) is used, which includes density-based spatial clustering of applications with noise (DBSCAN) for outlier detection and elimination, a hybrid synthetic minority over-sampling technique-edited nearest neighbor (SMOTE-ENN) for balancing the training data distribution, and XGBoost for heart disease prediction. Machine learning can be applied in the medical industry for disease diagnosis, detection, and prediction. The major purpose of this paper is to give clinicians a tool to help them diagnose heart problems early on. As a result, it will be easier to treat patients effectively and avoid serious repercussions. This study uses XGBoost to test alternative decision tree classification algorithms in the hope of improving the accuracy of heart disease diagnosis. Four types of machine learning (ML) models are compared in terms of the above-mentioned performance parameters: precision, accuracy, F1-measure, and recall.
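
A hedged sketch of the HDPM-style pipeline components listed in the abstract (DBSCAN outlier removal, SMOTE-ENN rebalancing, XGBoost classification) is given below using scikit-learn, imbalanced-learn, and xgboost; the hyperparameter values and the assumption of preprocessed numeric arrays X_train/y_train are illustrative.

```python
# Hedged sketch of the HDPM-style pipeline components named in the abstract:
# DBSCAN outlier removal, SMOTE-ENN rebalancing, and XGBoost classification.
# Requires imbalanced-learn and xgboost; X_train/y_train are assumed to be
# preprocessed (standardized) numeric arrays with binary labels.
from sklearn.cluster import DBSCAN
from imblearn.combine import SMOTEENN
from xgboost import XGBClassifier

def train_hdpm_like(X_train, y_train):
    # 1) drop points that DBSCAN labels as noise (-1)
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X_train)
    keep = labels != -1
    X_clean, y_clean = X_train[keep], y_train[keep]

    # 2) rebalance the training class distribution
    X_bal, y_bal = SMOTEENN(random_state=42).fit_resample(X_clean, y_clean)

    # 3) fit the gradient-boosted tree classifier
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
    return clf.fit(X_bal, y_bal)
```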

Journal ArticleDOI
TL;DR: The proposed work applied transfer learning classification models on both fake news and extremist-non-extremist datasets to check the performance of transfer learning models.
Abstract: The text classification problem has been thoroughly studied in information retrieval and data mining tasks. It is beneficial in multiple applications, including medical diagnosis and healthcare, targeted marketing, the entertainment industry, and group filtering processes. Recent innovations in both data mining and natural language processing have drawn the attention of researchers from all over the world to develop automated systems for text classification. NLP allows categorizing documents containing different texts. A huge amount of data is generated on social media sites by social media users. Three datasets have been used for experimental purposes, including the COVID-19 fake news dataset, the COVID-19 English tweet dataset, and the extremist-non-extremist dataset, which contain news blogs, posts, and tweets related to coronavirus and hate speech. Transfer learning approaches had not previously been experimented with on the COVID-19 fake news and extremist-non-extremist datasets. Therefore, the proposed work applied transfer learning classification models on both these datasets to check the performance of transfer learning models. Models are trained and evaluated on accuracy, precision, recall, and F1-score. Heat maps are also generated for every model. In the end, future directions are proposed.

Journal ArticleDOI
TL;DR: In this paper , the tumor is located using the combination of k-based clustering processes like k-nearest neighbor and k-means clustering, and the value of k in both methods is determined using the optimization process called the firefly algorithm.
Abstract: Imaging modalities are used to view organs and analyze different tissues in the body. Among such modalities, a new and developing imaging technique is hyperspectral imaging. This multicolour representation of tissues helps us to understand them better compared to previous imaging models. This research aims to analyze tumor localization in the brain by performing different operations on hyperspectral images. The tumor is located using a combination of k-based clustering processes, namely k-nearest neighbour and k-means clustering. The value of k in both methods is determined using an optimization process called the firefly algorithm. The optimization process reduces the manual calculation required to find the optimal value of k for segmenting the brain regions. The labelling of the areas of the brain is done using a multilayer feedforward neural network. The proposed technique produced better results than existing methods such as hybrid k-means clustering and parallel k-means clustering, with a higher peak signal-to-noise ratio and a lower mean absolute error value. The proposed model achieved 96.47% accuracy, 96.32% sensitivity, and 98.24% specificity, which are improved compared to other techniques.

Journal ArticleDOI
TL;DR: In this paper , the authors used a multilayer perceptron (MLP) classifier for emotion recognition from speech and achieved an overall accuracy of 81% for classifying eight different emotions by using the proposed model on the RAVDESS dataset.
Abstract: Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive control modalities such as voice, gesture, and mimicry. Particularly, speech has a great deal of information, conveying information about the speaker’s inner condition and his/her aim and desire. While word analysis enables the speaker’s request to be understood, other speech features disclose the speaker’s mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Moreover, the findings of the several professions involved in emotion recognition are difficult to combine. Many sound analysis methods have been developed in the past. However, it was not possible to provide an emotional analysis of people in a live speech. Today, the development of artificial intelligence and the high performance of deep learning methods bring studies on live data to the fore. This study aims to detect emotions in the human voice using artificial intelligence methods. One of the most important requirements of artificial intelligence works is data. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) open-source dataset was used in the study. The RAVDESS dataset contains more than 2000 data recorded as speeches and songs by 24 actors. Data were collected for eight different moods from the actors. It was aimed at detecting eight different emotion classes, including neutral, calm, happy, sad, angry, fearful, disgusted, and surprised moods. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was preferred for classification. The proposed model’s performance was compared with that of similar studies, and the results were evaluated. An overall accuracy of 81% was obtained for classifying eight different emotions by using the proposed model on the RAVDESS dataset.
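
A minimal sketch consistent with the described pipeline is shown below: MFCC features extracted with librosa feed a scikit-learn MLPClassifier. The choice of 40 averaged MFCCs, the hidden-layer sizes, and the wav_paths/labels variables are assumptions, not the authors' exact settings.

```python
# Minimal speech-emotion sketch consistent with the abstract: MFCC features
# from librosa feed a scikit-learn MLP classifier. Feature choice and
# hyperparameters are assumptions; wav_paths/labels are hypothetical inputs.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, n_mfcc=40):
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                 # average each coefficient over time frames

# X = np.vstack([mfcc_features(p) for p in wav_paths])   # wav_paths, labels assumed
# clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=42)
# clf.fit(X, labels)
```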

Journal ArticleDOI
TL;DR: The findings revealed that IoT, blockchain, and fog computing had become drivers of efficiency in the healthcare services in smart cities and Blockchain has been presented as a promising technology for ensuring the protection of private data, creating a decentralized database, and improving the interoperability of data.
Abstract: Nowadays, technology is evolving rapidly. Due to the impact of smart technologies, they have become a ubiquitous part of life. These technologies have led to the emergence of smart cities, geographic areas driven by advanced information and communication technologies. In the context of smart cities, IoT, blockchain, and fog computing have been found to be significant drivers of smart initiatives. In this regard, the present study focuses on delineating the impact and potential of blockchain, IoT, and fog computing on healthcare services in the context of smart cities. In pursuit of this objective, the study conducted a systematic review of the literature most relevant to the topic of the paper. In order to select the most relevant and credible articles, the researcher used PRISMA and AMSTAR, which culminated in the 10 most relevant articles for the present study. The findings revealed that IoT, blockchain, and fog computing have become drivers of efficiency in healthcare services in smart cities. Among the three technologies, IoT has been found to be widely incorporated. However, it is found to be lacking in terms of cost efficiency, data privacy, and interoperability of data. In this regard, blockchain technology and fog computing have been found to be more relevant to the healthcare sector in smart cities. Blockchain has been presented as a promising technology for ensuring the protection of private data, creating a decentralized database, and improving the interoperability of data, while fog computing has been presented as a promising technology for low-cost remote monitoring, reducing latency and increasing efficiency.

Journal ArticleDOI
TL;DR: This paper reviews MRI and ultrasound imaging features with respect to different imaging modalities for traditional knee OA diagnosis and updates recent image-based machine learning approaches for knee osteoarthritis diagnosis and prognosis.
Abstract: Knee osteoarthritis (OA) is a debilitating joint disorder characterized by cartilage loss that can be captured by imaging modalities and translated into imaging features. Observing imaging features is a well-known objective assessment for knee OA disorder. However, the variety of imaging features is rarely discussed. This study reviews knee OA imaging features with respect to different imaging modalities for traditional OA diagnosis and updates recent image-based machine learning approaches for knee OA diagnosis and prognosis. Although most studies recognized X-ray as the standard imaging option for knee OA diagnosis, the imaging features are limited to bony changes and less sensitive to short-term OA changes. Researchers have recommended the usage of MRI to study the hidden OA-related radiomic features in soft tissues and bony structures. Furthermore, ultrasound imaging features should be explored to make it more feasible for point-of-care diagnosis. Traditional knee OA diagnosis mainly relies on manual interpretation of medical images based on the Kellgren–Lawrence (KL) grading scheme, but this approach is consistently prone to human resource and time constraints and less effective for OA prevention. Recent studies revealed the capability of machine learning approaches in automating knee OA diagnosis and prognosis, through three major tasks: knee joint localization (detection and segmentation), classification of OA severity, and prediction of disease progression. AI-aided diagnostic models improved the quality of knee OA diagnosis significantly in terms of time taken, reproducibility, and accuracy. Prognostic ability was demonstrated by several prediction models in terms of estimating possible OA onset, OA deterioration, progressive pain, progressive structural change, progressive structural change with pain, and time to total knee replacement (TKR) incidence. Despite research gaps, machine learning techniques still manifest huge potential to work on demanding tasks such as early knee OA detection and estimation of future disease events, as well as fundamental tasks such as discovering new imaging features and establishing novel OA status measures. Continuous machine learning model enhancement may favour the discovery of new OA treatments in the future.

Journal ArticleDOI
TL;DR: A prototype was created to present an application that could help nurses in their clinical processes, storing their experiences in a case base for future research, employing one of the artificial intelligence techniques, case-based reasoning (CBR).
Abstract: Among the most popular applications of artificial intelligence (AI), those used in the health sector represent the largest proportion in terms of use and expectation. An investigative systematization model is proposed for the scientific training of nursing professionals, articulating epistemological positions from previous studies on the subject. In order to validate the proposed model, a prototype was created to present an application that could help nurses in their clinical processes, storing their experiences in a case base for future research. The prototype consisted of digitizing paediatric nursing diagnoses and inserting them into a case base in order to assess the effectiveness of the prototype in handling these cases in a structure conducive to retrieval, adaptation, indexing, and case comparison. As a result, this work presents a computational tool for the health area employing one of the artificial intelligence techniques, case-based reasoning (CBR). The small governmental nursing education institution in Bangladesh used in this study did not yet have the systemization of nursing care (NCS) or computerized support scales.

Journal ArticleDOI
TL;DR: The prediction results of data analysis on a real-world PD dataset revealed that expectation-maximization with the aid of SVR ensembles can provide better prediction accuracy than decision trees, deep belief networks, neurofuzzy approaches, and support vector regression combined with other clustering techniques in the prediction of Motor-UPDRS and Total-UPDRS.
Abstract: Parkinson's disease (PD) is a complex neurodegenerative disease. Accurate diagnosis of this disease in the early stages is crucial for its initial treatment. This paper aims to present a comparative study on the methods developed by machine learning techniques in PD diagnosis. We rely on clustering and prediction learning approaches to perform the comparative study. Specifically, we use different clustering techniques for PD data clustering and support vector regression ensembles to predict Motor-UPDRS and Total-UPDRS. The results are then compared with the other prediction learning approaches, multiple linear regression, neurofuzzy, and support vector regression techniques. The comparative study is performed on a real-world PD dataset. The prediction results of data analysis on a PD real-world dataset revealed that expectation-maximization with the aid of SVR ensembles can provide better prediction accuracy in relation to decision trees, deep belief network, neurofuzzy, and support vector regression combined with other clustering techniques in the prediction of Motor-UPDRS and Total-UPDRS.
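
The clustering-plus-regression idea can be sketched with scikit-learn as below: a Gaussian mixture fitted by expectation-maximization partitions the PD records, and a single SVR per cluster (standing in for the paper's SVR ensembles) predicts the UPDRS score; the data arrays and hyperparameters are assumptions.

```python
# Sketch of the clustering-plus-regression idea in the abstract: EM clustering
# (Gaussian mixture) of PD records, then one SVR per cluster for UPDRS
# prediction. A single SVR stands in for the paper's SVR ensembles; X and y
# are assumed numeric arrays with every cluster receiving some samples.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR

def fit_cluster_svr(X, y, n_clusters=3):
    gmm = GaussianMixture(n_components=n_clusters, random_state=42).fit(X)
    assignments = gmm.predict(X)
    models = {c: SVR(kernel="rbf", C=10.0).fit(X[assignments == c], y[assignments == c])
              for c in range(n_clusters)}
    return gmm, models

def predict_cluster_svr(gmm, models, X_new):
    clusters = gmm.predict(X_new)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(clusters, X_new)])
```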

Journal ArticleDOI
TL;DR: This research work proposes a segmentation approach to identify ground glass opacity, or the region of interest, in CT images of patients affected by coronavirus, with a modified structure of the Unet model used to classify the region of interest at the pixel level.
Abstract: The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide tested positive for COVID-19, and 5.48 million people died due to COVID-19 up to 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach to identify ground glass opacity (GGO), or the region of interest, in CT images of patients affected by coronavirus, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The problem with segmentation is that the GGO often appears indistinguishable from a healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added in order to establish the connection between the encoder and decoder pipeline. This gives the model a strong capacity to segment the GGO in COVID-19 cases, and the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained using accuracy, recall, precision, Dice coefficient, F1-score, and IOU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than the results obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be more accurate, fast, and reliable in helping doctors diagnose COVID-19 quickly and efficiently.

Journal ArticleDOI
TL;DR: It is recommended that the models proposed in several studies be validated further and implemented in different domains, to confirm their effectiveness and to ensure that they can be implemented effectively in several regions.
Abstract: A revolution in healthcare can be experienced with the advancement of smart sensorial things, Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), the Internet of Medical Things (IoMT), and edge analytics with the integration of cloud computing. Connected healthcare is receiving extraordinary contemplation from industry, government, and the healthcare communities. In this study, several studies published in the last 6 years, from 2016 to 2021, have been selected. The selection process is represented through a PRISMA flow chart. It has been identified that the increasing challenges of healthcare can be overcome by the application of AI, ML, DL, Edge AI, IoMT, 6G, and cloud computing. Still, only limited areas have implemented these latest advancements and experienced improvements in outcomes. These applications have shown successful results not only in resolving issues from the perspective of the patient but also from the perspective of healthcare professionals. It is recommended that the models proposed in several studies be validated further and implemented in different domains, to confirm their effectiveness and to ensure that they can be implemented effectively in several regions.

Journal ArticleDOI
TL;DR: This study utilizes a convolutional neural network (CNN), stacked autoencoder, and deep neural network to develop a COVID-19 diagnostic system that has outperformed the current existing state-of-the-art models in detecting the CO VID-19 virus using CT images.
Abstract: Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), the existence of clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests are used to diagnose COVID-19. This method can miss patients and cause more complications. Deep learning is one of the techniques that has been proven to be prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network to develop a COVID-19 diagnostic system. In this system, the classification stage undergoes some modification before the three techniques are applied to CT images to determine normal and COVID-19 cases. A large-scale and challenging CT image dataset was used in the training process of the employed deep learning models, and their final performance is reported. Experimental outcomes show that the highest accuracy rate was achieved using the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system outperformed the current existing state-of-the-art models in detecting the COVID-19 virus using CT images.

Journal ArticleDOI
TL;DR: The obtained results together with the model size suggest that the proposed CNN models, especially Deep-NSR, could be useful in wearable devices such as medical vests and bracelets for long-term monitoring of cardiac conditions, and in telemedicine to accurately diagnose arrhythmia from ECG automatically.
Abstract: Recently, cardiac arrhythmia recognition from electrocardiography (ECG) with deep learning approaches has become popular in clinical diagnosis systems due to its good prognostic findings, where expert data preprocessing and feature engineering are not usually required. However, a lightweight and effective deep model is in high demand to face the challenges of deploying models in real-life applications and diagnosing accurately. In this work, two effective and lightweight deep learning models named Deep-SR and Deep-NSR are proposed to recognize ECG beats, based on two-dimensional convolutional neural networks (2D CNNs) with different structural regularizations. First, 97720 ECG beats extracted from all records of the benchmark MIT-BIH arrhythmia dataset have been transformed into 2D RGB (red, green, and blue) images that act as the inputs to the proposed 2D CNN models. Then, the optimization of the proposed models is performed through the proper initialization of model layers, on-the-fly augmentation, regularization techniques, the Adam optimizer, and a weighted random sampler. Finally, the performance of the proposed models is evaluated by a stratified 5-fold cross-validation strategy along with callback features. The obtained overall accuracy of recognizing the normal beat and three arrhythmias (V-ventricular ectopic, S-supraventricular ectopic, and F-fusion) based on the Association for the Advancement of Medical Instrumentation (AAMI) is 99.93% for the proposed Deep-SR model and 99.96% for the Deep-NSR model, demonstrating that the proposed models surpass state-of-the-art models and also express higher model generalization. The obtained results together with the model size suggest that the proposed CNN models, especially Deep-NSR, could be useful in wearable devices such as medical vests and bracelets for long-term monitoring of cardiac conditions, and in telemedicine to accurately diagnose arrhythmia from ECG automatically. As a result, medical costs for patients and the work pressure on physicians in hospitals and clinics would be reduced effectively.
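
The stratified 5-fold evaluation protocol mentioned in the abstract can be sketched as follows; build_model() is a hypothetical factory returning a freshly compiled Keras-style 2D CNN with an accuracy metric, and the beat-image and label arrays are assumed to be prepared beforehand.

```python
# Sketch of the stratified 5-fold cross-validation protocol mentioned in the
# abstract. build_model() is a hypothetical factory for a compiled Keras-style
# model (loss + accuracy metric); images/labels are assumed NumPy arrays.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(images, labels, build_model, epochs=20):
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    fold_acc = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_model()                       # fresh model per fold
        model.fit(images[train_idx], labels[train_idx], epochs=epochs, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        fold_acc.append(acc)
    return float(np.mean(fold_acc))
```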

Journal ArticleDOI
TL;DR: This work has studied and compared the impact of three data augmentation techniques on the final performances of CNN architectures in the 3D domain for the early diagnosis of AD and found the performance of random zoomed in/out augmentation to be the best among all the augmentation methods.
Abstract: Alzheimer's disease (AD) is an irreversible illness of the brain impacting the functional and daily activities of elderly population worldwide. Neuroimaging sensory systems such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) measure the pathological changes in the brain associated with this disorder especially in its early stages. Deep learning (DL) architectures such as Convolutional Neural Networks (CNNs) are successfully used in recognition, classification, segmentation, detection, and other domains for data interpretation. Data augmentation schemes work alongside DL techniques and may impact the final task performance positively or negatively. In this work, we have studied and compared the impact of three data augmentation techniques on the final performances of CNN architectures in the 3D domain for the early diagnosis of AD. We have studied both binary and multiclass classification problems using MRI and PET neuroimaging modalities. We have found the performance of random zoomed in/out augmentation to be the best among all the augmentation methods. It is also observed that combining different augmentation methods may result in deteriorating performances on the classification tasks. Furthermore, we have seen that architecture engineering has less impact on the final classification performance in comparison to the data manipulation schemes. We have also observed that deeper architectures may not provide performance advantages in comparison to their shallower counterparts. We have further observed that these augmentation schemes do not alleviate the class imbalance issue.
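
A hedged sketch of the random zoomed-in/out augmentation that the study found most effective, applied to a single 3D volume with SciPy, is shown below; the zoom range and the corner-based crop/pad back to the original shape are illustrative choices.

```python
# Hedged sketch of random zoomed-in/out augmentation for a 3D neuroimaging
# volume; the zoom range and the simple crop/pad strategy are illustrative.
import numpy as np
from scipy.ndimage import zoom

def random_zoom_3d(volume, zoom_range=(0.9, 1.1)):
    factor = np.random.uniform(*zoom_range)
    zoomed = zoom(volume, factor, order=1)          # linear interpolation
    # crop or zero-pad back to the original shape so batch dimensions stay fixed
    out = np.zeros_like(volume)
    src = tuple(slice(0, min(z, o)) for z, o in zip(zoomed.shape, volume.shape))
    out[src] = zoomed[src]
    return out
```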

Journal ArticleDOI
TL;DR: This work proposes an optimized machine learning-based model that extracts optimal texture features from TB-related images and selects the hyper-parameters of the classifiers and highlights the efficiency of modified SVM classifier compared with other standard ones.
Abstract: Computer science plays an important role in modern dynamic health systems. Given the collaborative nature of the diagnostic process, computer technology provides important services to healthcare professionals and organizations, as well as to patients and their families, researchers, and decision-makers. Thus, any innovations that improve the diagnostic process while maintaining quality and safety are crucial to the development of the healthcare field. Many diseases can be tentatively diagnosed during their initial stages. In this study, all developed techniques were applied to tuberculosis (TB). Thus, we propose an optimized machine learning-based model that extracts optimal texture features from TB-related images and selects the hyper-parameters of the classifiers. Increasing the accuracy rate and minimizing the number of characteristics extracted are our goals. In other words, this is a multitask optimization issue. A genetic algorithm (GA) is used to choose the best features, which are then fed into a support vector machine (SVM) classifier. Using the ImageCLEF 2020 data set, we conducted experiments using the proposed approach and achieved significantly higher accuracy and better outcomes in comparison with the state-of-the-art works. The obtained experimental results highlight the efficiency of modified SVM classifier compared with other standard ones.
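
A minimal sketch of GA-driven feature selection wrapped around an SVM, in the spirit of the approach described, is given below; the population size, mutation rate, feature-count penalty, and 5-fold fitness estimate are illustrative choices, not the authors' settings.

```python
# Minimal sketch of GA-based feature selection feeding an SVM, in the spirit
# of the described approach; all GA settings and the fitness definition
# (cross-validated accuracy minus a per-feature penalty) are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ga_feature_selection(X, y, n_gen=20, pop_size=20, rng=np.random.default_rng(0)):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))      # binary feature masks

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        acc = cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y, cv=5).mean()
        return acc - 0.001 * mask.sum()                     # small penalty per feature

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
            flip = rng.random(n_feat) < 0.02                 # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return best.astype(bool)                                 # selected-feature mask
```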

Journal ArticleDOI
TL;DR: In this paper, the authors develop a deep learning architecture based on the convolutional neural network (CNN) and examine whether it can be implemented in this setting, showing that the proposed CNN achieves an average accuracy rate of 99.6% on training datasets and 86.3% on testing datasets.
Abstract: Populations at risk can benefit greatly from remote health monitoring because it allows for early detection and treatment. Because of recent advances in Internet-of-Things (IoT) paradigms, such monitoring systems are now available everywhere. Due to the essential nature of the patients being monitored, these systems demand a high level of quality in aspects such as availability and accuracy. In health applications, where a lot of data are accessible, deep learning algorithms have the potential to perform well. In this paper, we develop a deep learning architecture based on the convolutional neural network (CNN) and examine whether it can be implemented in this setting. The study uses an IoT system with a centralised cloud server, which is considered an ideal input data acquisition module. The study uses cloud computing resources by distributing CNN operations to the servers, with outsourced fitness functions performed at the edge. The results of the simulation show that the proposed method achieves a higher rate of classifying the input instances from the data acquisition tools than other methods. From the results, it is seen that the proposed CNN achieves an average accuracy rate of 99.6% on training datasets and 86.3% on testing datasets.

Journal ArticleDOI
TL;DR: The suggested work proposes a system for automatically detecting liver tumours and lesions in abdominal magnetic resonance images by using 3D affine-invariant and shape parameterization approaches, and reports the results of this study.
Abstract: Automated segmentation of liver cells and hepatic lesions is a significant step for studying biomarker characteristics in experimental analysis and computer-aided diagnosis support schemes. Lesion appearance varies from patient to patient depending on the size, the imaging equipment and settings, and the timing of the lesion. With practical approaches, it is difficult to determine the stages of liver cancer based on the segmentation of lesion patterns. Based on the training accuracy rate, the present algorithm confronts a number of obstacles in some domains. The suggested work proposes a system for automatically detecting liver tumours and lesions in abdominal magnetic resonance images by using 3D affine-invariant and shape parameterization approaches, and reports the results of this study. This point-to-point parameterization addresses the frequent issues associated with concave surfaces by establishing a standard model level for the organ's surface throughout the modelling process. Initially, the geodesic active contour analysis approach is used to separate the liver area from the rest of the body. The proposal is to minimise the error rate during the training operations, which are carried out using Cascaded Fully Convolutional Neural Networks (CFCNs) with the segmented tumour area as input; liver segmentation may help to reduce the error rate during the training procedures. The stage analysis of the data sets, which comprise training and testing pictures, is used to obtain the findings and validate them. The accuracy attained by the Cascaded Fully Convolutional Neural Network (CFCN) for the liver tumour analysis is 94.21 percent, with a computation time of less than 90 seconds per volume. The results of the trials show that the total accuracy rate of the training and testing procedure is 93.85 percent on the various volumes of the 3DIRCAD datasets tested.

Journal ArticleDOI
TL;DR: This work develops a new MapReduce-based LSDGM model (MR-LSDGM) for the healthcare Industry 4.0 context that includes a feature extractor based on long short-term memory and a classifier based on an ideal extreme learning machine (ELM).
Abstract: Several studies aimed at improving healthcare management have shown that the importance of healthcare has grown in recent years. In the healthcare industry, effective decision-making requires multicriteria group decision-making. Simultaneously, big data analytics could be used to help with disease detection and healthcare delivery. Only a few previous studies on large-scale group decision-making (LSDGM) in the big data-driven healthcare Industry 4.0 have focused on this topic. The goal of this work is to improve healthcare management decision-making by developing a new MapReduce-based LSDGM model (MR-LSDGM) for the healthcare Industry 4.0 context. Clustering decision-makers (DM), modelling DM preferences, and classification are the three stages of the MR-LSDGM technique. Furthermore, the DMs are subdivided using a novel biogeography-based optimization (BBO) technique combined with fuzzy C-means (FCM). The subgroup preferences are then modelled using the two-tuple fuzzy linguistic representation (2TFLR) technique. The final classification method also includes a feature extractor based on long short-term memory (LSTM) and a classifier based on an ideal extreme learning machine (ELM). MapReduce is a data management platform used to handle massive amounts of data. A thorough set of experimental analyses is carried out, and the results are analysed using a variety of metrics.

Journal ArticleDOI
TL;DR: The outcomes demonstrate that the SFL technique considerably improves the performance of the original LeNet-5 although using this algorithm slightly increases the training computation time, and the suggested algorithm presents high accuracy in classification and approximation in its mechanism.
Abstract: One of the leading algorithms and architectures in deep learning is the Convolutional Neural Network (CNN). It represents a unique method for image processing, object detection, and classification. CNN has proven to be an efficient approach in the machine learning and computer vision fields. A CNN is composed of several filters accompanied by nonlinear functions and pooling layers. It enforces limitations on the weights and interconnections of the neural network to create a good structure for processing spatially and temporally distributed data. A CNN can restrain the number of free parameters of the network through its weight-sharing property. However, the training of CNNs is challenging. Some optimization techniques have recently been employed to optimize CNN weights and biases, such as Ant Colony Optimization, Genetic Algorithms, Harmony Search, and Simulated Annealing. This paper employs the well-known nature-inspired algorithm called the Shuffled Frog-Leaping Algorithm (SFLA) for training a classical CNN structure (LeNet-5), which has not been attempted before. The training method is investigated by employing four different datasets. To verify the study, the results are compared with some of the most famous evolutionary trainers: the Whale Optimization Algorithm (WO), Bacteria Swarm Foraging Optimization (BFSO), and Ant Colony Optimization (ACO). The outcomes demonstrate that the SFL technique considerably improves the performance of the original LeNet-5, although using this algorithm slightly increases the training computation time. The results also demonstrate that the suggested algorithm presents high accuracy in classification and approximation in its mechanism.
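
For reference, the classical LeNet-5 structure that the paper trains can be written in Keras as below; this sketch compiles it with a standard gradient-based optimizer only and does not reproduce the SFLA trainer.

```python
# Keras sketch of the classical LeNet-5 structure; compiled with a standard
# gradient-based optimizer only -- the SFLA trainer from the paper is not shown.
from tensorflow.keras import layers, models

def lenet5(input_shape=(32, 32, 1), n_classes=10):
    model = models.Sequential([
        layers.Conv2D(6, 5, activation="tanh", input_shape=input_shape),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```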

Journal ArticleDOI
TL;DR: The primary objective of this research is to detect glaucoma from retinal fundus images, determining whether or not a patient is affected by glaucoma.
Abstract: Glaucoma is the second most common cause of blindness around the world and the third most common in Europe and the USA. Around 78 million people are presently living with glaucoma (2020). It is expected that 111.8 million people will have glaucoma by the year 2040. 90% of glaucoma cases are undetected in developing nations. It is essential to develop a glaucoma detection system for early diagnosis. In this research, early prediction of glaucoma using a deep learning technique is proposed. In the proposed deep learning model, the ORIGA dataset is used for the evaluation of glaucoma images. A U-Net architecture based on a deep learning algorithm is implemented for optic cup segmentation, and a pretrained transfer learning model, DenseNet-201, is used for feature extraction along with a deep convolutional neural network (DCNN). The DCNN approach is used for the classification, where the final result represents whether the image is glaucoma-infected or not. The primary objective of this research is to detect glaucoma from retinal fundus images, determining whether or not a patient is affected by glaucoma. The model is evaluated using parameters such as accuracy, precision, recall, specificity, and F-measure. Also, a comparative analysis is conducted for the validation of the proposed model. The output is compared to other current deep learning models used for CNN classification, such as VGG-19, Inception ResNet, ResNet 152v2, and DenseNet-169. The proposed model achieved 98.82% accuracy in training and 96.90% in testing. Overall, the performance of the proposed model is better in all the analyses.
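
A hedged Keras sketch of the DenseNet-201 feature-extraction stage described in the abstract is shown below; the optic-cup segmentation step, the ORIGA data loading, and the small classifier head are assumptions for illustration.

```python
# Hedged sketch of DenseNet-201 as a frozen feature extractor ahead of a small
# classifier head; the U-Net optic-cup segmentation step and ORIGA data loading
# are not shown, and the head/hyperparameters are illustrative assumptions.
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras import layers, Model

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                              # use pretrained weights as-is

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)      # glaucoma vs. healthy

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```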