
Showing papers in "Journal of Healthcare Engineering in 2020"


Journal ArticleDOI
TL;DR: In this work, various deep-CNN-based approaches are explored for detecting the presence of COVID-19 from chest CT images, and a decision-fusion-based approach is also proposed, which combines predictions from multiple individual models to produce a final prediction.
Abstract: Coronavirus Disease (COVID-19) is a fast-spreading infectious disease that is currently causing a healthcare crisis around the world. Due to the current limitations of reverse transcription-polymerase chain reaction (RT-PCR) tests for detecting COVID-19, radiology-imaging-based approaches have recently been proposed in various works. In this work, various deep-CNN-based approaches are explored for detecting the presence of COVID-19 from chest CT images. A decision-fusion-based approach is also proposed, which combines predictions from multiple individual models to produce a final prediction. Experimental results show that the proposed decision-fusion-based approach achieves above 86% across all the performance metrics under consideration, with an average AUROC and F1-score of 0.883 and 0.867, respectively. These observations suggest the potential applicability of such deep-CNN-based approaches in real diagnostic scenarios, which could be of high utility in achieving fast testing for COVID-19.
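The abstract does not specify the fusion rule, but a minimal sketch of one standard choice, soft voting over the individual CNNs' softmax outputs, illustrates the idea; the three model outputs below are hypothetical:

```python
import numpy as np

def decision_fusion(prob_list):
    """Soft-voting fusion: average the per-model class probabilities.

    prob_list: list of (n_samples, n_classes) softmax outputs,
    one array per individual CNN.
    """
    fused = np.mean(np.stack(prob_list, axis=0), axis=0)
    return fused.argmax(axis=1)

# Hypothetical outputs from three CNNs on two CT scans
# (columns: P(non-COVID), P(COVID)).
model_a = np.array([[0.30, 0.70], [0.80, 0.20]])
model_b = np.array([[0.40, 0.60], [0.55, 0.45]])
model_c = np.array([[0.25, 0.75], [0.70, 0.30]])
print(decision_fusion([model_a, model_b, model_c]))  # -> [1 0]
```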

116 citations


Journal ArticleDOI
TL;DR: This paper proposes a model based on AI and big data analytics for m-health; its findings will guide the development of techniques combining AI and big data for handling m-health data more effectively.
Abstract: Mobile health (m-health) refers to monitoring health using mobile phones, patient monitoring devices, and similar technologies. It has often been deemed a substantial technological breakthrough of this modern era. Recently, artificial intelligence (AI) and big data analytics have been applied within m-health to provide an effective healthcare system. Various types of data, such as electronic health records (EHRs), medical images, and complex text, which are diverse, poorly interpreted, and extensively unorganized, have been used in modern medical research. The emergence of mobile applications alongside healthcare systems is an important cause of these unorganized and unstructured datasets. In this paper, a systematic review is carried out on the application of AI and big data analytics to improve m-health systems. Various AI-based algorithms and big data frameworks are discussed with respect to the source of data, the techniques used, and the area of application. The paper explores the applications of AI and big data analytics for providing insights to users and enabling them to plan and use resources, especially for the specific challenges in m-health, and proposes a model based on AI and big data analytics for m-health. The findings of this paper will guide the development of techniques that combine AI and big data for handling m-health data more effectively.

74 citations


Journal ArticleDOI
TL;DR: A model for predicting COVID-19 using the SIR model and machine learning for smart health care and the well-being of the citizens of KSA is proposed; the study recommends that authorities apply a strict long-term containment strategy to reduce the epidemic size successfully.
Abstract: COVID-19 presents an urgent global challenge because of its contagious nature, frequently changing characteristics, and the lack of a vaccine or effective medicines. A model for measuring and preventing the continued spread of COVID-19 is urgently required to provide smart health care services. This requires using advanced intelligent computing such as artificial intelligence, machine learning, deep learning, cognitive computing, cloud computing, fog computing, and edge computing. This paper proposes a model for predicting COVID-19 using the SIR model and machine learning for smart health care and the well-being of the citizens of KSA. Knowing the number of susceptible, infected, and recovered cases each day is critical for mathematical modeling to identify the behavioral effects of the pandemic. The model forecasts the situation for the upcoming 700 days. The proposed system predicts whether COVID-19 will spread in the population or die out in the long run. Mathematical analysis and simulation results are presented as a means to forecast the progress of the outbreak and its possible end for three scenarios: "no actions," "lockdown," and "new medicines." The effects of interventions such as lockdown and new medicines are compared with the "no actions" scenario. Lockdown delays the peak point by decreasing infection and affects the area-equality rule of the infected curves. New medicines, on the other hand, have a significant impact on the infected curve by decreasing the number of infected people over time. Available forecast data on COVID-19 using simulations predict that the highest level of cases might occur between 15 and 30 November 2020. Simulation data suggest that the virus might be fully under control only after June 2021. The reproductive rate shows that measures such as government lockdowns and isolation of individuals are not enough to stop the pandemic. This study recommends that authorities should, as soon as possible, apply a strict long-term containment strategy to reduce the epidemic size successfully.
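As a rough illustration of the SIR machinery the paper builds on, the sketch below integrates the standard SIR equations over the paper's 700-day horizon; the population, seed cases, and rate parameters are illustrative assumptions, not the study's fitted values:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    # Classic SIR dynamics: S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I, R' = gamma*I
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

# Illustrative parameters only (not the paper's fitted values).
N, I0, R0 = 34_000_000, 100, 0      # approximate KSA population, seed cases
beta, gamma = 0.25, 0.1             # transmission and recovery rates
t = np.linspace(0, 700, 701)        # 700-day horizon, as in the paper
S, I, R = odeint(sir, (N - I0, I0, R0), t, args=(beta, gamma)).T
print(f"peak infections ~{I.max():,.0f} on day {I.argmax()}")
```

A lockdown scenario can be approximated by lowering beta over an interval, and a new-medicines scenario by raising gamma.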

73 citations


Journal ArticleDOI
TL;DR: An Internet of Medical Things- (IoMT-) based framework is provided to enhance and support a quick and safe identification of leukemia, and the results demonstrate that the suggested models outperform the other well-known machine learning algorithms used for healthy-versus-leukemia-subtypes identification.
Abstract: For the last few years, computer-aided diagnosis (CAD) has been advancing rapidly. Numerous machine learning algorithms have been developed to identify different diseases, e.g., leukemia. Leukemia is a white blood cell- (WBC-) related illness affecting the bone marrow and/or blood. A quick, safe, and accurate early-stage diagnosis of leukemia plays a key role in curing and saving patients' lives. Based on how it develops, leukemia has two primary forms, i.e., acute and chronic leukemia, and each form can be subcategorized as myeloid or lymphoid; there are, therefore, four leukemia subtypes. Various approaches have been developed to identify leukemia with respect to its subtypes. However, in terms of effectiveness, learning process, and performance, these methods require improvements. This study provides an Internet of Medical Things- (IoMT-) based framework to enhance and provide a quick and safe identification of leukemia. In the proposed IoMT system, clinical gadgets are linked to network resources with the help of cloud computing. The system allows real-time coordination for testing, diagnosis, and treatment of leukemia among patients and healthcare professionals, which may save both time and effort for patients and clinicians. Moreover, the presented framework is also helpful for resolving the problems of patients in critical condition during pandemics such as COVID-19. The methods used for the identification of leukemia subtypes in the suggested framework are the Dense Convolutional Neural Network (DenseNet-121) and the Residual Convolutional Neural Network (ResNet-34). Two publicly available leukemia datasets, i.e., ALL-IDB and the ASH image bank, are used in this study. The results demonstrate that the suggested models outperform the other well-known machine learning algorithms used for healthy-versus-leukemia-subtypes identification.
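A minimal sketch of how the two named backbones can be instantiated and re-headed for this task with torchvision (assuming a recent torchvision and, for illustration, five classes: healthy plus the four leukemia subtypes):

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # healthy + four leukemia subtypes (an assumption)

def build_densenet121(num_classes=NUM_CLASSES):
    model = models.densenet121(weights="DEFAULT")  # ImageNet weights
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

def build_resnet34(num_classes=NUM_CLASSES):
    model = models.resnet34(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```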

71 citations


Journal ArticleDOI
TL;DR: The calculations and evaluation in this research reveal that BCP-SVM is better than BCP-T1F: BCP-T1F achieves 96.56% accuracy, whereas BCP-SVM achieves 97.06%.
Abstract: Developing countries are still striving for the betterment of their health sectors. The disease most commonly found among women is breast cancer, and past research has proven that if the cancer is detected at a very early stage, the chances of overcoming the disease are higher than when it is detected or treated at a later stage. This article proposes a cloud-based intelligent BCP-T1F-SVM system with two variations/models, BCP-T1F and BCP-SVM, which employ two main soft computing algorithms. The proposed BCP-T1F-SVM expert system specifically determines the stage and the type of cancer a person is suffering from, elaborating the grievous stages of the cancer, i.e., to what extent a patient has suffered. The BCP-T1F expert system is employed in the diagnosis of breast cancer at an initial stage and deals with the different stages of the disease, while the proposed BCP-SVM gives the higher precision of the two proposed breast cancer detection models. The calculations and evaluation done in this research reveal that BCP-SVM is better than BCP-T1F: BCP-T1F achieves 96.56% accuracy, whereas BCP-SVM gives an accuracy of 97.06%. The research therefore concludes that BCP-SVM is better than BCP-T1F. The opinions were recommended by the medical experts of Sheikh Zayed Hospital Lahore, Pakistan, and Cavan General Hospital, Lisdaran, Cavan, Ireland.
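The abstract gives no implementation details for the SVM variant, but a minimal sketch of an RBF-kernel SVM breast cancer classifier conveys the flavor; the Wisconsin dataset and hyperparameters below are stand-ins, not the hospitals' data or the authors' settings:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: the Wisconsin breast cancer dataset, not the hospital
# records used by the authors.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.4f}")
```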

70 citations


Journal ArticleDOI
TL;DR: This study aims to review recent advancements and developments in CAD systems for breast cancer detection and diagnosis using mammograms and to give an overview of the methods used in each step, from preprocessing and enhancement through classification.
Abstract: According to the American Cancer Society's forecasts for 2019, there will be about 268,600 new cases of invasive breast cancer in women in the United States, about 62,930 new noninvasive cases, and about 41,760 deaths from breast cancer. As a result, there is a high demand for breast imaging specialists, as indicated in a recent report from the Institute of Medicine and National Research Council. One way to meet this demand is by developing Computer-Aided Diagnosis (CAD) systems for breast cancer detection and diagnosis using mammograms. This study aims to review recent advancements and developments in such CAD systems and to give an overview of the methods used in each of their steps, starting from the preprocessing and enhancement step and ending with the classification step. The current level of performance of CAD systems is encouraging but not sufficient for them to serve as standalone clinical detection and diagnosis systems. Unless their performance is improved dramatically from its current level, by enhancing existing methods, exploiting promising new pattern recognition methods such as data augmentation in deep learning, and exploiting advances in the computational power of computers, CAD systems will remain a second-opinion clinical procedure.

62 citations


Journal ArticleDOI
TL;DR: More research is needed on the topic, as evidenced by the low number of interventions found, and more rigorous methods are recommended, addressing human factors and reporting technology usage in future research.
Abstract: Background This review studies technology-supported interventions to help older adults, living in situations of reduced mobility, overcome loneliness and social isolation. The focus is on long-distance interactions, investigating the (i) challenges addressed and strategies applied; (ii) technology used in interventions; and (iii) social interactions enabled. Methods We conducted a search on Elsevier's Scopus database for related work published until January 2020, focusing on (i) intervention studies supported mainly by technology-mediated communication, (ii) aiming at supported virtual social interactions between people, and (iii) evaluating the impact on loneliness or social isolation. Results Of the 1178 papers screened, 25 met the inclusion criteria. Computer and Internet training was the dominant strategy, allowing access to communication technologies, while in recent years, we see more studies aiming to provide simple, easy-to-use technology. The technology used was mostly off-the-shelf, with fewer solutions tailored to older adults. Social interactions targeted mainly friends and family, and most interventions focused on more than one group of people. Discussion All interventions reported positive results, suggesting feasibility. However, more research is needed on the topic (especially randomized controlled trials), as evidenced by the low number of interventions found. We recommend more rigorous methods, addressing human factors and reporting technology usage in future research.

57 citations


Journal ArticleDOI
TL;DR: This paper focuses on connecting the brain with a mobile home robot by translating brain signals to computer commands to build a brain-computer interface that may offer the promise of greatly enhancing the quality of life of disabled and able-bodied people by considerably improving their autonomy, mobility, and abilities.
Abstract: The assistive, adaptive, and rehabilitative applications of EEG-based robot control and navigation are undergoing a major transformation in dimension as well as scope. Against the background of artificial intelligence, medical and nonmedical robots have rapidly developed and have gradually been applied to enhance the quality of people's lives. We focus on connecting the brain with a mobile home robot by translating brain signals to computer commands, to build a brain-computer interface that may offer the promise of greatly enhancing the quality of life of disabled and able-bodied people by considerably improving their autonomy, mobility, and abilities. Several types of robots have been controlled using BCI systems to complete real-time simple and/or complicated tasks with high performance. In this paper, a new EEG-based intelligent teleoperation system was designed for a mobile wall-crawling cleaning robot. This robot uses a crawler mechanism instead of traditional wheels so that it can clean windows as well as floors. For the EEG-based system to control the robot's position as it climbs the wall and completes its cleaning tasks, we extracted steady-state visually evoked potentials (SSVEPs) from the collected electroencephalography (EEG) signals. The visual stimulation interface in the proposed SSVEP-based BCI was composed of four flicker pieces with different frequencies (6 Hz, 7.5 Hz, 8.57 Hz, and 10 Hz). Seven subjects were able to smoothly control the movement directions of the cleaning robot by looking at the corresponding flicker using their brain activity. To solve the multiclass problem, thereby achieving the purpose of cleaning the wall within a short period, the canonical correlation analysis (CCA) classification algorithm was used. Offline and online experiments were conducted to analyze/classify EEG signals and use them as real-time commands. The proposed system was efficient in the classification and control phases, with an obtained accuracy of 89.92%, and had an efficient response speed and timing, with a bit rate of 22.23 bits/min. These results suggest that the proposed EEG-based cleaning-robot system is promising for smart home control in terms of completing wall-cleaning tasks with efficiency, safety, and robustness.
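A minimal sketch of the standard CCA scoring step for SSVEP: each candidate stimulation frequency is represented by sine/cosine references (with harmonics), and the frequency with the highest canonical correlation against the EEG segment wins. Segment length, channel count, and the two-harmonic choice are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_classify(eeg, fs, stim_freqs=(6.0, 7.5, 8.57, 10.0), harmonics=2):
    """Pick the stimulation frequency whose sinusoidal reference set is
    most canonically correlated with the multichannel EEG segment.

    eeg: array of shape (n_samples, n_channels).
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        # Reference set: sin/cos at f and its harmonics.
        refs = np.column_stack(
            [wave(2 * np.pi * f * h * t)
             for h in range(1, harmonics + 1)
             for wave in (np.sin, np.cos)])
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))], scores
```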

56 citations


Journal ArticleDOI
TL;DR: A chatbot service was developed for the Covenant University Doctor (CUDoctor) telehealth system based on fuzzy logic rules and fuzzy inference, which provides a personalized diagnosis utilizing self-input from users to effectively diagnose diseases.
Abstract: The use of natural language processing (NLP) methods and their application to developing conversational systems for health diagnosis increases patients' access to medical knowledge. In this study, a chatbot service was developed for the Covenant University Doctor (CUDoctor) telehealth system based on fuzzy logic rules and fuzzy inference. The service focuses on assessing the symptoms of tropical diseases in Nigeria. The Telegram Bot Application Programming Interface (API) was used to create the interconnection between the chatbot and the system, while the Twilio API was used for interconnectivity between the system and a short messaging service (SMS) subscriber. The service uses a knowledge base consisting of known facts on diseases and symptoms acquired from medical ontologies. A fuzzy support vector machine (SVM) is used to effectively predict the disease based on the symptoms entered. The users' inputs are recognized by NLP and are forwarded to CUDoctor for decision support. Finally, a notification message displaying the end of the diagnosis process is sent to the user. The result is a medical diagnosis system that provides a personalized diagnosis, utilizing self-input from users to effectively diagnose diseases. The usability of the developed system was evaluated using the system usability scale (SUS), yielding a mean SUS score of 80.4, which indicates an overall positive evaluation.
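The paper's actual rule base is not given here, so the sketch below only shows the general shape of a fuzzy inference step, with a toy two-rule base for a hypothetical tropical-disease symptom score; the membership functions, rules, and the `malaria_score` name are all invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def malaria_score(fever, headache):
    """Toy fuzzy rule base on 0-10 symptom severities.
    Rule 1: IF fever is high AND headache is high THEN likelihood is high.
    Rule 2: IF fever is medium THEN likelihood is medium.
    """
    fever_hi, fever_med = tri(fever, 6, 10, 14), tri(fever, 3, 5, 7)
    headache_hi = tri(headache, 6, 10, 14)
    rule1 = min(fever_hi, headache_hi)   # fuzzy AND -> min
    rule2 = fever_med
    # Weighted defuzzification with high = 0.9, medium = 0.5.
    num = rule1 * 0.9 + rule2 * 0.5
    den = rule1 + rule2
    return num / den if den else 0.0

print(f"malaria likelihood: {malaria_score(fever=8, headache=7):.2f}")
```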

38 citations


Journal ArticleDOI
TL;DR: A robust and efficient method based on transfer learning techniques is proposed to identify normal and COVID-19 patients by employing small training data; it is competitive with the available literature, demonstrating that it could be used for the early detection of COVID-19 patients.
Abstract: Due to the rapid spread of COVID-19 and the deaths it has caused worldwide, it is imperative to develop a reliable tool for the early detection of this disease. Chest X-ray is currently accepted to be one of the reliable means for such a detection purpose. However, most of the available methods utilize large training data, and there is a need for improvement in detection accuracy due to the limited boundary segments of the acquired images for symptom identification. In this study, a robust and efficient method based on transfer learning techniques is proposed to identify normal and COVID-19 patients by employing small training data. Transfer learning builds accurate models in a time-saving way. First, data augmentation was performed to help the network memorize image details. Next, five state-of-the-art transfer learning models (AlexNet, MobileNetv2, ShuffleNet, SqueezeNet, and Xception) with three optimizers (Adam, SGDM, and RMSProp) were implemented at various learning rates (1e-4, 2e-4, 3e-4, and 4e-4) to reduce the probability of overfitting. All the experiments were performed on publicly available datasets, with several analytical measurements attained after execution with a 10-fold cross-validation method. The results suggest that MobileNetv2 with the Adam optimizer at a learning rate of 3e-4 provides an average accuracy, recall, precision, and F-score of 97%, 96.5%, 97.5%, and 97%, respectively, which are higher than those of all other combinations. The proposed method is competitive with the available literature, demonstrating that it could be used for the early detection of COVID-19 patients.
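A minimal sketch of the best-performing combination the paper reports (MobileNetV2 fine-tuned with Adam at a learning rate of 3e-4), assuming a recent torchvision; the two-class head and the training-step wrapper are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Best combination reported by the paper: MobileNetV2 + Adam at lr 3e-4.
model = models.mobilenet_v2(weights="DEFAULT")
model.classifier[1] = nn.Linear(model.last_channel, 2)  # normal vs COVID-19

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of chest X-ray tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```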

33 citations


Journal ArticleDOI
TL;DR: An integrated computer-aided system based on deep learning is proposed for the detection of multiple categories of tuberculosis lesions in chest radiographs; it is superior to current systems and can be used to assist radiologists in diagnosis and public health providers in screening for tuberculosis in areas where tuberculosis is endemic.
Abstract: The early screening and diagnosis of tuberculosis plays an important role in the control and treatment of tuberculosis infections. In this paper, an integrated computer-aided system based on deep learning is proposed for the detection of multiple categories of tuberculosis lesions in chest radiographs. In this system, the fully convolutional neural network method is used to segment the lung area from the entire chest radiograph for pulmonary tuberculosis detection. Different from previous analyses of the whole chest radiograph, we focus on specific tuberculosis lesion areas and propose the first multicategory tuberculosis lesion detection method. In it, a scalable pyramid structure is introduced into the Faster Region-based Convolutional Network (Faster RCNN), which effectively improves the detection of small-area lesions; indistinguishable samples are mined during the training process; and reinforcement learning is used to reduce the detection of false-positive lesions. To compare our method with the current tuberculosis detection system, we propose a classification rule for whole chest X-rays using the multicategory tuberculosis lesion detection model and achieve good performance on two public datasets (Montgomery: AUC = 0.977 and accuracy = 0.926; Shenzhen: AUC = 0.941 and accuracy = 0.902). Our proposed computer-aided system is superior to current systems and can be used to assist radiologists in diagnosis and public health providers in screening for tuberculosis in areas where it is endemic.
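The paper's scalable pyramid, hard-example mining, and reinforcement-learning additions are not reproduced here, but torchvision's stock Faster R-CNN with a feature pyramid network shows the detection backbone this kind of method builds on; the four-lesion-category assumption is ours:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# 4 hypothetical lesion categories + background class.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=5)
model.eval()

with torch.no_grad():
    # Inference takes a list of CHW image tensors and returns, per image,
    # a dict with "boxes", "labels", and "scores".
    preds = model([torch.rand(3, 512, 512)])
print(preds[0].keys())
```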

Journal ArticleDOI
TL;DR: A hesitant fuzzy multicriteria decision making (MCDM) method, hesitant fuzzy Analytic Hierarchy Process (hesitant F-AHP), is implemented to make pairwise comparison of COVID-19 country-level intervention strategies applied by various countries and determine relative importance scores.
Abstract: In this study, a hesitant fuzzy AHP method is presented to help decision makers (DMs), especially policymakers, governors, and physicians, evaluate the importance of intervention strategy alternatives applied by various countries for the COVID-19 pandemic. In this research, a hesitant fuzzy multicriteria decision making (MCDM) method, hesitant fuzzy Analytic Hierarchy Process (hesitant F-AHP), is implemented to make pairwise comparison of COVID-19 country-level intervention strategies applied by various countries and determine relative importance scores. An illustrative study is presented where fifteen intervention strategies applied by various countries in the world during the COVID-19 pandemic are evaluated by seven physicians (a professor of infectious diseases and clinical microbiology, an infectious disease physician, a clinical microbiology physician, two internal medicine physicians, an anesthesiology and reanimation physician, and a family physician) in Turkey who act as DMs in the process.
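Hesitant F-AHP aggregates several experts' hesitant judgments, but its core is the ordinary AHP step sketched below: derive priority weights from a pairwise comparison matrix and check consistency. The 3x3 matrix is a toy example, not the study's fifteen-strategy data:

```python
import numpy as np

def ahp_priorities(M):
    """Priority weights from a pairwise comparison matrix via the
    geometric-mean method, plus Saaty's consistency ratio (CR)."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    gm = M.prod(axis=1) ** (1.0 / n)     # row geometric means
    w = gm / gm.sum()                    # normalized priority vector
    lam_max = (M @ w / w).mean()         # principal eigenvalue estimate
    RI = {1: 0, 2: 0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index table
    CR = (lam_max - n) / ((n - 1) * RI) if n > 2 else 0.0
    return w, CR

# Toy comparison of three intervention strategies (invented values):
# lockdown vs. mass testing vs. travel bans.
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(M)
print(weights.round(3), f"CR={cr:.3f}")   # CR < 0.1 means acceptably consistent
```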

Journal ArticleDOI
TL;DR: The study clearly highlights that mixed reality is a promising technique that will soon enter operating rooms in many hospitals across the world to support surgeons during surgical procedures.
Abstract: Currently, surgeons in operating rooms are forced to split their attention between the patient's body and flat, low-quality surgical monitors in order to get all the information needed to successfully complete surgeries. The way the data are displayed disturbs the surgeon's vision, which may affect performance, and other members of the surgical team lack proper visual tools to aid the surgeon. The idea underlying this paper is to exploit mixed reality to support surgeons during surgical procedures. In particular, the proposed experimental setup, employed in the operating room, is based on an architecture that puts together the Microsoft HoloLens, a Digital Imaging and Communications in Medicine (DICOM) player, and a mixed reality visualization tool (i.e., Spectator View) developed using the Mixed Reality Toolkit in Unity with the Windows 10 SDK. The suggested approach enables visual information on the patient's body, as well as the results of medical screenings, to be visualized on the surgeon's headset. Additionally, the architecture enables any data and details to be shared by the team members or by external users during surgical operations. The paper analyses in detail the advantages and drawbacks that the surgeons found when wearing the Microsoft HoloLens headset during the ten open abdomen surgeries conducted at the IRCCS Hospital "Giovanni Paolo II" in the city of Bari (Italy). A survey based on a Likert scale demonstrates how the suggested tools can increase execution speed by allowing multitasking procedures, i.e., by checking medical images at high resolution without leaving the operating table and the patient. On the other hand, the survey also reveals an increase in physical stress and reduced comfort due to the weight of the Microsoft HoloLens device, along with drawbacks due to its battery autonomy. Additionally, the survey seems to encourage the use of the DICOM Viewer and Spectator View both for surgical education and for improving surgery outcomes. Note that the real use of the conceived platform in the operating room represents a remarkable feature of this paper, since most if not all of the studies conducted so far in the literature exploit mixed reality only in simulated environments and not in real operating rooms. In conclusion, the study clearly highlights that, despite the challenges of improving the current technology in the forthcoming years, mixed reality represents a promising technique that will soon enter operating rooms to support surgeons during surgical procedures in many hospitals across the world.

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed system is successfully employed for the diagnosis of chronic diseases and has enhanced the performance of machine learning algorithms.
Abstract: Chronic diseases represent a serious threat to public health across the world, estimated to account for about 60% of all deaths worldwide and approximately 43% of the global burden of disease. Analysis of healthcare data has thus helped health officials, patients, and healthcare communities to perform early detection of these diseases, and extracting patterns from healthcare data has helped healthcare communities obtain complete medical data for the purpose of diagnosis. The objective of the present research work is to improve the surveillance detection system for chronic diseases, which is used for the protection of people's lives. For this purpose, the proposed system has been developed to enhance the detection of chronic disease by using machine learning algorithms. Standard data related to chronic diseases were collected from various worldwide resources. Healthcare data on chronic diseases often include ambiguous class objects; the presence of ambiguous objects indicates traits involving two or more classes, which reduces the accuracy of machine learning algorithms. The novelty of the current research work lies in using noncrisp Rough K-means (RKM) clustering to resolve this ambiguity in chronic disease datasets and improve the performance of the system. The RKM algorithm clusters data into two sets, namely, the upper approximation and the lower approximation. The objects belonging to the upper approximation are favourable objects, whereas the ones belonging to the lower approximation are excluded and identified as ambiguous; these ambiguous objects are excluded to improve the machine learning algorithms. The machine learning algorithms, namely, naive Bayes (NB), support vector machine (SVM), K-nearest neighbors (KNN), and random forest, are presented and compared. The chronic disease data are obtained from the machine learning repository and Kaggle to test and evaluate the proposed model. The experimental results demonstrate that the proposed system is successfully employed for the diagnosis of chronic diseases. With respect to the accuracy metric, the proposed model achieved its best results with naive Bayes plus RKM for the classification of diabetic disease (80.55%), whereas SVM plus RKM achieved 100% for the classification of kidney disease and 97.53% for the classification of cancer. Performance measures such as accuracy, sensitivity, specificity, precision, and F-score are employed to evaluate the proposed system, and an evaluation and comparison of the proposed system with existing machine learning algorithms are presented. Finally, the proposed system enhanced the performance of the machine learning algorithms.
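A minimal sketch of Lingras-West style rough k-means as the abstract describes it, under the assumption that "ambiguous" objects are those nearly equidistant (ratio threshold `zeta`) from more than one centroid; the weights and threshold are illustrative:

```python
import numpy as np

def rough_kmeans(X, k, w_low=0.7, w_up=0.3, zeta=1.2, iters=100, seed=0):
    """Minimal rough k-means.  Points close to more than one centroid
    (distance ratio <= zeta) are ambiguous and enter only the upper
    approximations; unambiguous points enter a lower approximation."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)        # (n, k)
        nearest = d.argmin(axis=1)
        ambiguous = (d <= zeta * d.min(axis=1, keepdims=True)).sum(axis=1) > 1
        new_C = C.copy()
        for j in range(k):
            lower = X[(nearest == j) & ~ambiguous]
            upper = X[(d[:, j] <= zeta * d.min(axis=1)) & ambiguous]
            if len(lower) and len(upper):
                new_C[j] = w_low * lower.mean(0) + w_up * upper.mean(0)
            elif len(lower):
                new_C[j] = lower.mean(0)
            elif len(upper):
                new_C[j] = upper.mean(0)
        if np.allclose(new_C, C):
            break
        C = new_C
    # Per the paper, the `ambiguous` rows would be dropped before
    # training the downstream NB/SVM/KNN/random forest classifiers.
    return C, nearest, ambiguous
```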

Journal ArticleDOI
TL;DR: Experimental evaluations show that the proposed approach could yield objective and quantitative FP diagnosis results, which agree with those obtained by an experienced clinician, and outperforms the state-of-the-art systems.
Abstract: Facial paralysis (FP) is a loss of facial movement due to nerve damage. Most existing diagnosis systems of FP are subjective, e.g., the House-Brackmann (HB) grading system, which highly depends on the skilled clinicians and lacks an automatic quantitative assessment. In this paper, we propose an efficient yet objective facial paralysis assessment approach via automatic computational image analysis. First, the facial blood flow of FP patients is measured by the technique of laser speckle contrast imaging to generate both RGB color images and blood flow images. Second, with an improved segmentation approach, the patient's face is divided into concerned regions to extract facial blood flow distribution characteristics. Finally, three HB score classifiers are employed to quantify the severity of FP patients. The proposed method has been validated on 80 FP patients, and quantitative results demonstrate that our method, achieving an accuracy of 97.14%, outperforms the state-of-the-art systems. Experimental evaluations also show that the proposed approach could yield objective and quantitative FP diagnosis results, which agree with those obtained by an experienced clinician.

Journal ArticleDOI
Liu Hao, Yue Keqiang, Cheng Siyi, Pan Chengming, Sun Jie, Li Wenjun
TL;DR: An improved loss function and three hybrid model structures, Hybrid-a, Hybrid-f, and Hybrid-c, are proposed and shown to improve the performance of DR classification models.
Abstract: Diabetic retinopathy (DR) is one of the most common complications of diabetes and the main cause of blindness. The progression of the disease can be prevented by early diagnosis of DR. Due to differences in the distribution of medical resources and low labor efficiency, the best time for diagnosis and treatment is often missed, which results in impaired vision. Using neural network models to classify and diagnose DR can improve efficiency and reduce costs. In this work, an improved loss function and three hybrid model structures, Hybrid-a, Hybrid-f, and Hybrid-c, are proposed to improve the performance of DR classification models. The EfficientNetB4, EfficientNetB5, NASNetLarge, Xception, and InceptionResNetV2 CNNs were chosen as the basic models. These basic models were trained using the enhanced cross-entropy loss and the standard cross-entropy loss, respectively. The output of the basic models was used to train the hybrid model structures. Experiments showed that the enhanced cross-entropy loss can effectively accelerate the training process of the basic models and improve their performance under various evaluation metrics. The proposed hybrid model structures can also improve DR classification performance. Compared with the best-performing results among the basic models, the hybrid model structures improved the accuracy of DR classification from 85.44% to 86.34%, the sensitivity from 98.48% to 98.77%, the specificity from 71.82% to 74.76%, the precision from 90.27% to 91.37%, and the F1 score from 93.62% to 93.9%.
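The exact architecture of Hybrid-a/f/c is not spelled out in the abstract; one plausible reading, sketched below, is stacking: concatenate the base CNNs' class probabilities and fit a small meta-classifier on them. The logistic-regression combiner is our assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_hybrid(base_probs, y):
    """Stacking combiner: concatenate the base models' class-probability
    outputs (held-out predictions from EfficientNetB4/B5, NASNetLarge,
    Xception, InceptionResNetV2, each (n_samples, n_classes)) and train
    a small meta-classifier on them."""
    Z = np.concatenate(base_probs, axis=1)
    return LogisticRegression(max_iter=1000).fit(Z, y)

def predict_hybrid(meta, base_probs):
    return meta.predict(np.concatenate(base_probs, axis=1))
```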

Journal ArticleDOI
TL;DR: These results show the feasibility of using the hrf for effective removal of noise from fNIRS data and point to the optimal filter for a specific cortical task in a specific cortical region.
Abstract: Functional near-infrared spectroscopy (fNIRS) is one of the latest noninvasive brain function measuring techniques and has been used for the purpose of brain-computer interfacing (BCI). In this paper, we compare and analyze the effect of the six most commonly used filtering techniques (i.e., Gaussian, Butterworth, Kalman, hemodynamic response filter (hrf), Wiener, and finite impulse response) on the classification accuracies of fNIRS-BCI. To determine the optimal filter for a specific cortical task in a specific cortical region, we divided our experimental tasks according to the three main cortical regions: prefrontal, motor, and visual cortex. Three different experiments were performed for prefrontal and motor execution tasks, while one was performed for visual stimuli. The tasks performed for the prefrontal cortex include rest (R) vs mental arithmetic (MA), R vs object rotation (OB), and OB vs MA; similarly, for motor execution, R vs left finger tapping (LFT), R vs right finger tapping (RFT), and LFT vs RFT; likewise, for the visual cortex, an R vs visual stimuli (VS) task. These experiments were performed for ten trials with five subjects. For consistency among the extracted data, six statistical features were evaluated using oxygenated hemoglobin, namely, slope, mean, peak, kurtosis, skewness, and variance. A combination of these six features was used to classify the data with a nonlinear support vector machine (SVM). The classification accuracies obtained from the SVM using the hrf and Gaussian filters were significantly higher for R vs MA, R vs OB, R vs RFT, and R vs VS, and the Wiener filter was significantly better for OB vs MA. Similarly, for R vs LFT and LFT vs RFT, the hrf was found to be significant. These results show the feasibility of using the hrf for effective removal of noise from fNIRS data.
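For concreteness, here is what one of the six compared filters looks like in code: a zero-phase Butterworth band-pass, as commonly applied to fNIRS; the cut-offs and order are illustrative, not the paper's exact settings:

```python
from scipy.signal import butter, filtfilt

def fnirs_bandpass(signal, fs, low=0.01, high=0.2, order=4):
    """Zero-phase Butterworth band-pass, a common fNIRS choice for keeping
    task-related hemodynamics while rejecting slow drift and cardiac noise.
    Cut-off frequencies here are assumptions for illustration."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)
```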

Journal ArticleDOI
TL;DR: A protocol of semi-immersive video-game based therapy, combined with conventional therapy, may be effective for improving balance, functionality, quality of life, and motivation in patients with subacute stroke.
Abstract: Purpose To determine the effects of a structured protocol using commercial video games on balance, postural control, functionality, quality of life, and level of motivation in patients with subacute stroke. Methods A randomized controlled trial was conducted. A control group (n = 25) received eight weeks of conventional rehabilitation consisting of five weekly sessions based on an approach for task-oriented motor training. The experimental group (n = 25) received the same conventional rehabilitation combined with a structured protocol of semi-immersive video-game-based therapy. Results In the between-group comparison, statistically significant differences were observed in the Modified Rankin scores (p < 0.01) and the Barthel Index (p < 0.01). Conclusion A protocol of semi-immersive video-game based therapy, combined with conventional therapy, may be effective for improving balance, functionality, quality of life, and motivation in patients with subacute stroke. This trial is registered with NCT03528395.

Journal ArticleDOI
TL;DR: This assessment proposes a two-step methodology for hospital bed vacancy and reallocation during the COVID-19 pandemic that can provide a direction for governments and policymakers to develop strategies based on a robust quantitative production-capacity measure.
Abstract: Data envelopment analysis (DEA) is a powerful nonparametric engineering tool for estimating the technical efficiency and production capacity of service units. Assuming an equally proportional change in the output/input ratio, we can estimate how many additional medical resources health service units would require if the number of hospitalizations were expected to increase during an epidemic outbreak. This assessment proposes a two-step methodology for hospital bed vacancy and reallocation during the COVID-19 pandemic. The framework determines the production capacity of hospitals through data envelopment analysis and incorporates the complexity of needs, in two categories, for the reallocation of beds across medical specialties. As a result, we obtain a set of inefficient healthcare units with less complex bed slacks to be reduced, that is, to be reallocated to patients presenting with more severe conditions. The first results of this work, in collaboration with state and municipal administrations in Brazil, identify 3772 beds that could be vacated across 64% of the analyzed health units, of which more than 82% are moderate-complexity vacancies. The proposed assessment and methodology can provide a direction for governments and policymakers to develop strategies based on a robust quantitative production-capacity measure.
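A minimal sketch of the DEA building block: an input-oriented CCR efficiency score computed per hospital with a linear program. The four-hospital toy data are invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of one DMU (hospital) via the
    envelopment LP:  min theta  s.t.  X @ lam <= theta * x0,
    Y @ lam >= y0, lam >= 0.
    X: (m_inputs, n_units), Y: (s_outputs, n_units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lam_1..lam_n]
    A_ub = np.block([
        [-X[:, [unit]], X],                  # X lam - theta*x0 <= 0
        [np.zeros((s, 1)), -Y],              # -Y lam <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                           # efficiency score in (0, 1]

# Toy data: 4 hospitals, inputs = (beds, staff), output = admissions.
X = np.array([[100., 150., 120., 90.], [30., 40., 35., 25.]])
Y = np.array([[800., 900., 950., 700.]])
print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(4)])
```

Scores below 1 flag inefficient units, whose input slacks (e.g., beds) are the reallocation candidates in the paper's second step.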

Journal ArticleDOI
TL;DR: T1-weighted structural MRI was used for the early classification of AD with the multiatlas label propagation with expectation–maximization-based refinement segmentation method, which performed very well for all four types of dataset.
Abstract: Alzheimer's disease (AD) is one of the most common neurodegenerative illnesses (dementias) among the elderly. Recently, researchers have developed new methods for the automated analysis of AD based on machine learning and its subfield, deep learning. Recent state-of-the-art techniques consider multimodal diagnosis, which has been shown to achieve high accuracy compared to unimodal approaches. Furthermore, many studies have used structural magnetic resonance imaging (MRI) to measure brain volumes and the volumes of subregions, as well as to search for diffuse changes in white/gray matter in the brain. In this study, T1-weighted structural MRI was used for the early classification of AD. MRI yields high-intensity visible features, making preprocessing and segmentation easy. To use this image modality, we acquired four types of datasets from their respective servers: 326 subjects from the National Research Center for Dementia homepage, 123 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) homepage, 121 subjects from the Alzheimer's Disease Repository Without Borders homepage, and 131 subjects from the National Alzheimer's Coordinating Center homepage. In our experiment, we used the multiatlas label propagation with expectation–maximization-based refinement segmentation method. We segmented the images into 138 anatomical morphometric measures (40 features for subcortical volumes and the remaining 98 for cortical thickness). The entire dataset was split into a 70:30 (training and testing) ratio before classifying the data. Principal component analysis was used for dimensionality reduction. Then, a support vector machine with a radial basis function kernel was used for classification between two groups: AD versus healthy control (HC), and early mild cognitive impairment (EMCI) versus late MCI (LMCI). The proposed method performed very well for all four types of dataset. For instance, for the AD versus HC group, the classifier achieved an area under the curve (AUC) of more than 89% for each dataset; for the EMCI versus LMCI group, it achieved an AUC of more than 80% for every dataset. Moreover, we also calculated Cohen's kappa and Jaccard index statistics for all datasets to evaluate the classification reliability. Finally, we compared our results with those of recently published state-of-the-art methods.
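A minimal sketch of the classification stage as described (70:30 split, PCA, RBF-kernel SVM, AUC and Cohen's kappa); the 95% retained-variance setting and other hyperparameters are assumptions:

```python
from sklearn.decomposition import PCA
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_eval(X, y):
    """X: (n_subjects, 138) morphometric features; y: 0 = HC, 1 = AD
    (or EMCI vs LMCI).  Returns (AUC, Cohen's kappa) on the test split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)   # 70:30 split
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=0.95),             # keep 95% variance
                        SVC(kernel="rbf", probability=True))
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    return roc_auc_score(y_te, p), cohen_kappa_score(y_te, clf.predict(X_te))
```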

Journal ArticleDOI
TL;DR: A method of intelligent diagnosis of pediatric CHD murmurs is developed successfully and can be used for online screening of CHD in children.
Abstract: Heart auscultation is a convenient tool for early diagnosis of heart diseases and is being developed to be an intelligent tool used in online medicine. Currently, there are few studies on intelligent diagnosis of pediatric murmurs due to congenital heart disease (CHD). The purpose of the study was to develop a method of intelligent diagnosis of pediatric CHD murmurs. Phonocardiogram (PCG) signals of 86 children were recorded with 24 children having normal heart sounds and 62 children having CHD murmurs. A segmentation method based on the discrete wavelet transform combined with Hadamard product was implemented to locate the first and the second heart sounds from the PCG signal. Ten features specific to CHD murmurs were extracted as the input of classifier after segmentation. Eighty-six artificial neural network classifiers were composed into a classification system to identify CHD murmurs. The accuracy, sensitivity, and specificity of diagnosis for heart murmurs were 93%, 93.5%, and 91.7%, respectively. In conclusion, a method of intelligent diagnosis of pediatric CHD murmurs is developed successfully and can be used for online screening of CHD in children.
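The segmentation step can be sketched as follows, assuming (as an illustration, not the paper's exact recipe) a db6 wavelet, two mid-frequency detail bands, and a fixed threshold; the Hadamard product of the reconstructed bands emphasizes instants where both are active, i.e., candidate S1/S2 locations:

```python
import numpy as np
import pywt

def heart_sound_envelope(pcg, wavelet="db6", level=5, keep=("d4", "d5")):
    """DWT-based S1/S2 localization sketch: reconstruct two detail bands,
    take their elementwise (Hadamard) product, then threshold."""
    coeffs = pywt.wavedec(np.asarray(pcg, dtype=float), wavelet, level=level)
    # coeffs layout: [cA5, cD5, cD4, cD3, cD2, cD1]
    bands = []
    for name in keep:
        idx = level - int(name[1]) + 1        # position of cD_k in the list
        sel = [np.zeros_like(c) for c in coeffs]
        sel[idx] = coeffs[idx]
        bands.append(pywt.waverec(sel, wavelet)[: len(pcg)])
    env = np.abs(bands[0] * bands[1])         # Hadamard product of the bands
    return env > 0.2 * env.max()              # candidate S1/S2 mask
```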

Journal ArticleDOI
TL;DR: A machine learning-based approach to classify PD patients from the healthy older group (HOG) based on estimated gait characteristics is proposed, and a good correlation is shown between the proposed approach, the Tinetti mobility test, and the 3D motion capture system.
Abstract: In the last few years, the importance of measuring gait characteristics has increased tenfold due to their direct relationship with various neurological diseases. As patients suffering from Parkinson's disease (PD) are more prone to movement disorders, the quantification of gait characteristics helps in personalizing treatment. Wearable sensors make the measurement process more convenient as well as feasible in a practical environment. However, the question remains whether a wearable sensor-based measurement system is valid in a real-world scenario. This paper proposes a study that includes an algorithmic approach, based on data collected from wearable accelerometers, for the estimation of gait characteristics, and its validation using the Tinetti mobility test and a 3D motion capture system. It also proposes a machine learning-based approach to classify PD patients from the healthy older group (HOG) based on the estimated gait characteristics. The results show a good correlation between the proposed approach, the Tinetti mobility test, and the 3D motion capture system. It was found that decision tree classifiers outperformed other classifiers, with a classification accuracy of 88.46%. The obtained results provide enough evidence that the proposed approach could be suitable for assessing PD in a home-based, free-living, real-time environment.
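A minimal sketch of the winning classifier family, a depth-limited decision tree cross-validated on per-subject gait features; the feature choice and depth are assumptions:

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def evaluate_gait_classifier(X, y):
    """X: per-subject gait features estimated from the accelerometer
    signals (e.g., stride time, cadence, step symmetry); y: 1 = PD,
    0 = HOG.  Returns mean 5-fold cross-validated accuracy."""
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)  # depth assumed
    return cross_val_score(tree, X, y, cv=5).mean()
```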

Journal ArticleDOI
TL;DR: A method of recognizing OSAHS that is convenient for patients to monitor themselves in daily life to avoid delayed treatment is presented; the patient's AHI value can be obtained by the algorithm, which determines the severity degree of OSAHS.
Abstract: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is extremely harmful to the human body and may cause neurological and endocrine dysfunction, resulting in damage to multiple organs and systems throughout the body and negatively affecting the cardiovascular, renal, and mental systems. Clinically, doctors usually use standard polysomnography (PSG) to assist diagnosis. PSG determines whether a person has apnea syndrome using multidimensional data such as brain waves, heart rate, and blood oxygen saturation. In this paper, we present a method of recognizing OSAHS that is convenient for patients to use for self-monitoring in daily life to avoid delayed treatment. Firstly, we theoretically analyzed the differences between the snoring sounds of normal people and OSAHS patients in the time and frequency domains. Secondly, snoring sounds related to apnea events and nonapnea-related snoring sounds were classified by deep learning, and the severity of OSAHS symptoms was then recognized. In the proposed algorithm, snoring features are extracted through three feature extraction methods: MFCC, LPCC, and LPMFCC. Moreover, we adopted CNN and LSTM models for classification. The experimental results show that the combination of MFCC features and the LSTM model achieved the highest accuracy, 87%, for binary classification of snoring data. Moreover, the patient's AHI value can be obtained by the algorithm, which determines the severity degree of OSAHS.
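A minimal sketch of the best-performing pipeline (MFCC features into an LSTM for binary classification); the frame parameters, hidden size, and 16 kHz resample are assumptions:

```python
import librosa
import torch
import torch.nn as nn

def mfcc_features(path, n_mfcc=13):
    """Load a snore recording and return a (frames, n_mfcc) tensor."""
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    return torch.tensor(m.T, dtype=torch.float32)

class SnoreLSTM(nn.Module):
    """Binary classifier: apnea-related vs. non-apnea-related snore."""
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):               # x: (batch, frames, n_mfcc)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])         # logits from the last hidden state
```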

Journal ArticleDOI
TL;DR: A multitask dense connection U-Net (MDU-Net) is proposed to address the challenge of bone segmentation from a chest radiograph and a mask encoding mechanism is presented that can force the network to learn the background features.
Abstract: Automatic bone segmentation from a chest radiograph is an important and challenging task in medical image analysis. However, a chest radiograph contains numerous artifacts and tissue shadows, such as trachea, blood vessels, and lung veins, which limit the accuracy of traditional segmentation methods, such as thresholding and contour-related techniques. Deep learning has recently achieved excellent segmentation of some organs, such as the pancreas and the hippocampus. However, the insufficiency of annotated datasets impedes clavicle and rib segmentation from chest X-rays. We have constructed a dataset of chest X-rays with a raw chest radiograph and four annotated images showing the clavicles, anterior ribs, posterior ribs, and all bones (the complete set of ribs and clavicle). On the basis of a sufficient dataset, a multitask dense connection U-Net (MDU-Net) is proposed to address the challenge of bone segmentation from a chest radiograph. We first combine the U-Net multiscale feature fusion method, DenseNet dense connection, and multitasking mechanism to construct the proposed network referred to as MDU-Net. We then present a mask encoding mechanism that can force the network to learn the background features. Transfer learning is ultimately introduced to help the network extract sufficient features. We evaluate the proposed network by fourfold cross validation on 88 chest radiography images. The proposed method achieves the average DSC (Dice similarity coefficient) values of 93.78%, 80.95%, 89.06%, and 88.38% in clavicle segmentation, anterior rib segmentation, posterior rib segmentation, and segmentation of all bones, respectively.
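The abstract does not give the loss, but a common choice for this kind of multitask segmentation, sketched below, is a weighted sum of per-task soft Dice losses over the four bone masks; the equal weights are an assumption:

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss for one binary mask; pred are sigmoid probabilities."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(preds, targets, weights=(1.0, 1.0, 1.0, 1.0)):
    """Sum of per-task Dice losses over the clavicle, anterior-rib,
    posterior-rib, and all-bones masks; equal weights are an assumption."""
    return sum(w * dice_loss(p, t) for w, p, t in zip(weights, preds, targets))
```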

Journal ArticleDOI
TL;DR: Precision agriculture issues at the network output are analyzed using star and mesh topologies with TCP as the transmission protocol; the results show that the proposed mechanism has good performance and output.
Abstract: A wireless sensor network is a collection of sensor nodes with confined power supplies that perform limited computation. Because of the restricted communication range and the large number of sensor nodes, packets sent through the sensor network rely primarily on multihop data transmission. Current wireless sensor networks are widely used in a range of applications, such as precision agriculture, healthcare, and smart cities. The network covers a wide domain and addresses multiple aspects of agriculture, such as soil moisture, temperature, and humidity. In this work, precision agriculture issues at the output of the network are analyzed using star and mesh topologies with TCP as the transmission protocol. The system is equipped with two sensors: an Arduino DFRobot sensor for soil moisture and a DHT11 for relative temperature and humidity. The experiments are performed using the NS2 simulator, which provides an improved interface for analyzing the results. The results show that the proposed mechanism has good performance and output.

Journal ArticleDOI
TL;DR: These experiments show that the proposed Chinese clinical entity recognition model based on deep learning pretraining can effectively improve the recognition performance.
Abstract: Background Clinical named entity recognition is the basic task of mining electronic medical record text, and it faces several challenges: the language of Chinese electronic medical records features many compound entities, seriously missing sentence components, and unclear entity boundaries. Moreover, corpora of Chinese electronic medical records are difficult to obtain. Methods Addressing these characteristics of Chinese electronic medical records, this study proposes a Chinese clinical entity recognition model based on deep learning pretraining. The model uses word embeddings from a domain corpus and fine-tunes an entity recognition model pretrained on a relevant corpus. BiLSTM and Transformer are then used, respectively, as feature extractors to identify four types of clinical entities, namely diseases, symptoms, drugs, and operations, from the text of Chinese electronic medical records. Results 75.06% Macro-P, 76.40% Macro-R, and 75.72% Macro-F1 were achieved on the test dataset. Conclusions These experiments show that the proposed Chinese clinical entity recognition model based on deep learning pretraining can effectively improve recognition performance.
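A minimal sketch of the BiLSTM feature-extractor variant: pretrained embeddings feed a bidirectional LSTM that emits per-token logits over entity tags. The tag inventory and the omission of a CRF layer (often used on top of such extractors) are simplifications:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Pretrained embeddings -> bidirectional LSTM -> per-token logits
    over BIO tags for the four entity types (disease, symptom, drug,
    operation)."""
    def __init__(self, embeddings, n_tags, hidden=128):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(embeddings, freeze=False)
        self.lstm = nn.LSTM(embeddings.size(1), hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):          # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                 # (batch, seq_len, n_tags)
```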

Journal ArticleDOI
TL;DR: This paper analyzes the big data structure model in a cloud computing environment and gives a detailed modified immune evolutionary method for clustering medical data, including encoding, constructing the fitness function, and selecting genetic operators, to overcome the disadvantages of traditional clustering algorithms.
Abstract: Medical data have the characteristics of particularity and complexity, and big data clustering plays a significant role in the area of medicine. Traditional clustering algorithms, however, easily fall into local extrema, which generates clustering deviation and poor clustering results. Therefore, we propose a new medical big data clustering algorithm based on a modified immune evolutionary method in a cloud computing environment to overcome these disadvantages. Firstly, we analyze the big data structure model in the cloud computing environment. Secondly, we describe the modified immune evolutionary method for clustering medical data in detail, including encoding, constructing the fitness function, and selecting genetic operators. Finally, experiments show that this new approach can improve the accuracy of data classification, reduce the error rate, and improve the performance of data mining and feature extraction for medical data clustering.
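A minimal clonal-selection sketch of the immune evolutionary idea: antibodies encode candidate centroid sets, fitness is the negative within-cluster SSE, and the best antibodies are cloned and mutated each generation. Population sizes and mutation scale are illustrative, and the paper's cloud-specific machinery is omitted:

```python
import numpy as np

def immune_cluster(X, k, pop=20, gens=100, clones=5, sigma=0.1, seed=0):
    """Clonal-selection clustering: each antibody is a set of k centroids."""
    rng = np.random.default_rng(seed)
    P = X[rng.choice(len(X), (pop, k))]          # (pop, k, d) centroid sets

    def fitness(C):
        dist = np.linalg.norm(X[:, None] - C[None], axis=2)
        return -dist.min(axis=1).sum()           # -SSE to nearest centroid

    for _ in range(gens):
        f = np.array([fitness(C) for C in P])
        elite = P[np.argsort(f)[-(pop // 2):]]   # clone the better half
        mutants = np.repeat(elite, clones, axis=0)
        mutants = mutants + rng.normal(0.0, sigma * X.std(), mutants.shape)
        cand = np.concatenate([elite, mutants])
        f = np.array([fitness(C) for C in cand])
        P = cand[np.argsort(f)[-pop:]]           # survivor selection
    return P[np.array([fitness(C) for C in P]).argmax()]
```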

Journal ArticleDOI
TL;DR: This paper presents a machine-learning system capable of extracting individual fatigue descriptors (IFDs) from electromyographic and heart rate variability measurements; it reflects the onset of fatigue by implementing a combination of a dimensionless (0-1) global fatigue descriptor (GFD) and a support vector machine (SVM) classifier.
Abstract: Research in physiology and sports science has shown that fatigue, a complex psychophysiological phenomenon, has a relevant impact on performance and on the correct functioning of our motor system, potentially being a cause of damage to the human organism. Fatigue can be seen as a subjective or objective phenomenon. Subjective fatigue corresponds to a mental and cognitive event, while fatigue referred to as objective is a physical phenomenon. Despite the fact that subjective fatigue is often undervalued, only a physically and mentally healthy athlete is able to achieve top performance in a discipline. Therefore, we argue that physical training programs should address the preventive assessment of both subjective and objective fatigue mechanisms in order to minimize the risk of injuries. In this context, our paper presents a machine-learning system capable of extracting individual fatigue descriptors (IFDs) from electromyographic (EMG) and heart rate variability (HRV) measurements. Our novel approach, using two types of biosignals so that a global (mental and physical) fatigue assessment is taken into account, reflects the onset of fatigue by implementing a combination of a dimensionless (0-1) global fatigue descriptor (GFD) and a support vector machine (SVM) classifier. The system, based on nine main combined features, achieves a fatigue regime classification performance that ensures a successful preventive assessment when dangerous fatigue levels are reached. Training data were acquired in a constant work rate test (executed by 14 subjects using a cycloergometry device), in which the variable under study (fatigue) gradually increased until the volunteer reached an objective exhaustion state.
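Two representative features of the kind such a system could feed its SVM, sketched below: one objective marker (EMG median frequency, which drifts downward with muscular fatigue) and one tied to the autonomic side (HRV RMSSD). Both are standard measures, though the paper's exact nine features are not listed here:

```python
import numpy as np
from scipy.signal import welch

def emg_median_frequency(emg, fs):
    """Median frequency of the EMG power spectrum; its downward drift is
    a classic objective (muscular) fatigue marker."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]

def hrv_rmssd(rr_ms):
    """RMSSD of successive RR intervals (ms), a common HRV feature
    associated with the autonomic (mental/global) side of fatigue."""
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))
```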

Journal ArticleDOI
TL;DR: A novel pipeline that can attain state-of-the-art recognition accuracies on a recent and standard dataset, the Human Gait Database (HuGaDB), is proposed; the results underline the potential of incorporating EMG signals, especially when fusion and selection are done simultaneously.
Abstract: This research addresses the challenge of recognizing human daily activities using surface electromyography (sEMG) and wearable inertial sensors. Effective and efficient recognition in this context has emerged as a cornerstone of robust remote health monitoring systems, among other applications. We propose a novel pipeline that can attain state-of-the-art recognition accuracies on a recent and standard dataset, the Human Gait Database (HuGaDB). Using wearable gyroscopes, accelerometers, and electromyography sensors placed on the thigh, shin, and foot, we developed an approach that jointly performs sensor fusion and feature selection. Being done jointly, the proposed pipeline empowers the learned model to benefit from the interaction of features that might have been dropped otherwise. Using statistical and time-based features from heterogeneous signals of the aforementioned sensor types, our approach attains a mean accuracy of 99.8%, which is the highest accuracy on HuGaDB in the literature. This research underlines the potential of incorporating EMG signals, especially when fusion and selection are done simultaneously, and it remains valid even with simple off-the-shelf feature selection methods such as the Sequential Feature Selection family of algorithms. Moreover, through extensive simulations, we show that the left thigh is a key placement for attaining high accuracies; with one inertial sensor on that single placement alone, we were able to achieve a mean accuracy of 98.4%. The presented in-depth comparative analysis shows the influence that every sensor type, position, and placement can have on the attained recognition accuracies, a tool that can facilitate the development of robust systems customized to specific scenarios and real-life applications.
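A minimal sketch of joint fusion-plus-selection as described: features from all sensor types are concatenated first, so that a sequential feature selector can keep cross-sensor combinations that per-sensor selection would lose. The KNN estimator and `n_keep` value are assumptions:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def select_and_fit(X, y, n_keep=20):
    """X: fused feature matrix (statistical/time-domain features
    concatenated across gyroscope, accelerometer, and sEMG channels);
    y: activity labels."""
    knn = KNeighborsClassifier(n_neighbors=5)
    model = make_pipeline(
        StandardScaler(),
        SequentialFeatureSelector(knn, n_features_to_select=n_keep,
                                  direction="forward"),
        knn)
    return model.fit(X, y)
```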

Journal ArticleDOI
TL;DR: A Gradient Boosting Decision Tree (GBDT) classifier-based fall detection algorithm (GBDT-FD for short) with comprehensive data fusion of posture sensors and human video skeletons is proposed to improve detection accuracy.
Abstract: Since falls are happening with increasing frequency, they have become a major public health problem in an aging society. There are considerable demands to distinguish fall events of seniors with accurate detection and real-time alarms. However, some daily activities are erroneously signaled as falls, and there are too many false alarms in actual applications. In order to resolve this problem, this paper designs and implements a comprehensive fall detection framework on the basis of inertial posture sensors and surveillance cameras. In the proposed system framework, data sources representing behavior characteristics that indicate potential falls are derived from wearable triaxial accelerometers and the monitoring videos of surveillance cameras. Moreover, the NB-IoT-based communication mode is adopted to transmit wearable sensory data to the Internet for subsequent analysis. Furthermore, a Gradient Boosting Decision Tree (GBDT) classifier-based fall detection algorithm (GBDT-FD for short) with comprehensive data fusion of posture sensors and human video skeletons is proposed to improve detection accuracy. Experimental results verify the good performance of the proposed GBDT-FD algorithm compared to six kinds of existing fall detection algorithms, including SVM-based and NN-based fall detection. Finally, we implement the proposed integrated system, including wearable posture sensors and monitoring software, on a cloud server.
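A minimal sketch of the GBDT classification core on fused sensor/skeleton features; the hyperparameters are assumptions, and the NB-IoT transport and video pipeline are out of scope:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def train_fall_detector(X, y):
    """X: fused feature vectors, e.g. triaxial-accelerometer statistics
    concatenated with skeleton-derived posture features; y: 1 = fall,
    0 = daily activity."""
    gbdt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                      max_depth=3)   # hyperparameters assumed
    print("CV accuracy:", cross_val_score(gbdt, X, y, cv=5).mean())
    return gbdt.fit(X, y)
```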