
Showing papers in "Journal of Medical Systems in 2017"


Journal ArticleDOI
TL;DR: The aim of this review is to investigate barriers and challenges of wearable patient monitoring (WPM) solutions adopted by clinicians in acute, as well as in community, care settings and to consider recent studies published between 2015 and 2017.
Abstract: The aim of this review is to investigate barriers and challenges of wearable patient monitoring (WPM) solutions adopted by clinicians in acute, as well as in community, care settings. Currently, healthcare providers are coping with ever-growing healthcare challenges including an ageing population, chronic diseases, the cost of hospitalization, and the risk of medical errors. WPM systems are a potential solution for addressing some of these challenges by enabling advanced sensors, wearable technology, and secure and effective communication platforms between the clinicians and patients. A total of 791 articles were screened and 20 were selected for this review. The most common publication venue was conference proceedings (13, 54%). This review only considered recent studies published between 2015 and 2017. The identified studies involved chronic conditions (6, 30%), rehabilitation (7, 35%), cardiovascular diseases (4, 20%), falls (2, 10%) and mental health (1, 5%). Most studies focussed on the system aspects of WPM solutions including advanced sensors, wireless data collection, communication platform and clinical usability based on a specific area or disease. The current studies are progressing with localized sensor-software integration to solve a specific use-case/health area using non-scalable and "silo" solutions. There is further work required regarding interoperability and clinical acceptance challenges. The advancement of wearable technology and the possibilities of using machine learning and artificial intelligence in healthcare have been investigated by many studies. We believe future patient monitoring and medical treatments will build upon efficient and affordable solutions of wearable technology.

201 citations


Journal ArticleDOI
TL;DR: The results showed that the Support Vector Machine classifier by using filtered subset evaluator with the Best First search engine feature selection method has higher accuracy rate in the diagnosis of Chronic Kidney Disease compared to other selected methods.
Abstract: As Chronic Kidney Disease progresses slowly, early detection and effective treatment are the only way to reduce the mortality rate. Machine learning techniques are gaining significance in medical diagnosis because of their classification ability with high accuracy rates. The accuracy of classification algorithms depends on the use of correct feature selection algorithms to reduce the dimension of datasets. In this study, the Support Vector Machine classification algorithm was used to diagnose Chronic Kidney Disease. To diagnose the disease, two essential types of feature selection methods, namely wrapper and filter approaches, were chosen to reduce the dimension of the Chronic Kidney Disease dataset. In the wrapper approach, the classifier subset evaluator with the greedy stepwise search engine and the wrapper subset evaluator with the Best First search engine were used. In the filter approach, the correlation feature selection subset evaluator with the greedy stepwise search engine and the filtered subset evaluator with the Best First search engine were used. The results showed that the Support Vector Machine classifier using the filtered subset evaluator with the Best First search engine feature selection method has a higher accuracy rate (98.5%) in the diagnosis of Chronic Kidney Disease compared to the other selected methods.
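The filter approach can be sketched generically as follows; this is an illustration using Pearson correlation as an assumed filter score, not the paper's exact subset evaluators and search engines:

```python
import numpy as np

def filter_select(X, y, k):
    """Rank features by absolute Pearson correlation with the class
    label and keep the top k - a simple stand-in for a filter-approach
    subset evaluator."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```

A wrapper approach would instead score each candidate subset by retraining the classifier on it, which is more expensive but tailored to the classifier.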

156 citations


Journal ArticleDOI
TL;DR: This review seeks to analyze and discuss prominent security techniques for healthcare organizations seeking to adopt a secure electronic health records system using PubMed, CINAHL, and ProQuest Nursing and Allied Health Source as sources.
Abstract: The privacy of patients and the security of their information is the most imperative barrier to entry when considering the adoption of electronic health records in the healthcare industry. Considering current legal regulations, this review seeks to analyze and discuss prominent security techniques for healthcare organizations seeking to adopt a secure electronic health records system. Additionally, the researchers sought to establish a foundation for further research for security in the healthcare industry. The researchers utilized the Texas State University Library to gain access to three online databases: PubMed (MEDLINE), CINAHL, and ProQuest Nursing and Allied Health Source. These sources were used to conduct searches on literature concerning security of electronic health records containing several inclusion and exclusion criteria. Researchers collected and analyzed 25 journals and reviews discussing security of electronic health records, 20 of which mentioned specific security methods and techniques. The most frequently mentioned security measures and techniques are categorized into three themes: administrative, physical, and technical safeguards. The sensitive nature of the information contained within electronic health records has prompted the need for advanced security techniques that are able to put these worries at ease. It is imperative for security techniques to cover the vast threats that are present across the three pillars of healthcare.

127 citations


Journal ArticleDOI
TL;DR: A review of existing predictive models in medicine and health care can be found in this article. The authors reveal that the predictions of these models differ even when the same dataset is used, explain the most widely used machine learning methods, and clarify the confusion between statistical approaches and machine learning.
Abstract: Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most widely used machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

126 citations


Journal ArticleDOI
TL;DR: iAQ, a low-cost indoor air quality monitoring wireless sensor network system, developed using Arduino, XBee modules and micro sensors, for storage and availability of monitoring data on a web portal in real time, reveals that the system can provide an effective indoor air quality assessment to prevent exposure risk.
Abstract: Indoor environments are characterized by several pollutant sources. Because people spend more than 90% of their time in indoor environments, several studies have pointed out the impact of indoor air quality on the etiopathogenesis of a wide number of non-specific symptoms which characterize the "Sick Building Syndrome", involving the skin, the upper and lower respiratory tract, the eyes and the nervous system, as well as many building-related diseases. Thus, indoor air quality (IAQ) is recognized as an important factor to be controlled for the occupants' health and comfort. The majority of the monitoring systems presently available are very expensive and only allow the collection of random samples. This work describes iAQ, a low-cost indoor air quality monitoring wireless sensor network system, developed using Arduino, XBee modules and micro sensors, for storage and availability of monitoring data on a web portal in real time. Five micro sensors of environmental parameters (air temperature, humidity, carbon monoxide, carbon dioxide and luminosity) were used. Other sensors can be added for monitoring specific pollutants. The results reveal that the system can provide an effective indoor air quality assessment to prevent exposure risk. In fact, the indoor air quality may be extremely different from what is expected for a quality living environment. Systems like this would be of benefit as public health interventions to reduce the burden of symptoms and diseases related to "sick buildings".
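A threshold-based assessment of the kind such a system might run over its sensor readings can be sketched as follows; the limits below are illustrative assumptions commonly cited for indoor comfort, not the iAQ system's actual values:

```python
def assess_iaq(co2_ppm, co_ppm, temp_c, rh_pct):
    """Flag readings outside commonly cited indoor comfort/safety
    ranges. Thresholds are illustrative assumptions, not the iAQ
    system's actual limits."""
    alerts = []
    if co2_ppm > 1000:
        alerts.append("CO2 above 1000 ppm")
    if co_ppm > 9:
        alerts.append("CO above 9 ppm")
    if not 18 <= temp_c <= 26:
        alerts.append("temperature outside 18-26 C")
    if not 30 <= rh_pct <= 60:
        alerts.append("humidity outside 30-60 %")
    return alerts
```

In a deployment, a check like this would run server-side on each batch of readings pushed by the XBee nodes, driving the real-time alerts shown on the web portal.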

126 citations


Journal ArticleDOI
TL;DR: A critical need to reexamine how health information systems are maintained is highlighted, including a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized.
Abstract: On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected numerous sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain's National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A "critical" patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the 59 days since it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.

122 citations


Journal ArticleDOI
TL;DR: A systematic review of the most recent advancements in retinal vessel segmentation methods published in the last five years is carried out, providing insight into active problems and possible future directions towards building a successful computer-aided diagnostic system.
Abstract: Retinal vessel segmentation is a key step towards the accurate visualization, diagnosis, early treatment and surgery planning of ocular diseases. For the last two decades, a tremendous amount of research has been dedicated to developing automated methods for segmentation of blood vessels from retinal fundus images. Despite these efforts, segmentation of retinal vessels remains a challenging task due to the presence of abnormalities, varying size and shape of the vessels, non-uniform illumination and anatomical variability between subjects. In this paper, we carry out a systematic review of the most recent advancements in retinal vessel segmentation methods published in the last five years. The objectives of this study are as follows: first, we discuss the most crucial preprocessing steps that are involved in accurate segmentation of vessels. Second, we review the most recent state-of-the-art retinal vessel segmentation techniques, which are classified into different categories based on their main principle. Third, we quantitatively analyse these methods in terms of their sensitivity, specificity, accuracy, and area under the curve, and discuss newly introduced performance metrics in the current literature. Fourth, we discuss the advantages and limitations of the existing segmentation techniques. Finally, we provide insight into active problems and possible future directions towards building a successful computer-aided diagnostic system.
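The metrics used in the quantitative comparison follow directly from a pixel-level confusion matrix; a minimal sketch over flattened binary masks (AUC, which requires probabilistic outputs, is omitted):

```python
def segmentation_metrics(pred, truth):
    """Sensitivity, specificity and accuracy from two binary vessel
    masks, given as flattened sequences of 0/1 pixel labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    sensitivity = tp / (tp + fn)   # vessel pixels correctly detected
    specificity = tn / (tn + fp)   # background pixels correctly rejected
    accuracy = (tp + tn) / len(pred)
    return sensitivity, specificity, accuracy
```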

110 citations


Journal ArticleDOI
TL;DR: The aim of this study is to illustrate the teaching potential of applying Virtual Reality in the field of human anatomy, where it can be used as a tool for education in medicine.
Abstract: Virtual Reality is becoming widespread in our society within very different areas, from industry to entertainment. It has many advantages in education as well, since it allows visualizing almost any object or going anywhere in a unique way. We will be focusing on medical education, and more specifically anatomy, where its use is especially interesting because it allows studying any structure of the human body by placing the user inside each one. By allowing virtual immersion in a body structure such as the interior of the cranium, stereoscopic vision goggles make these innovative teaching technologies a powerful tool for training in all areas of health sciences. The aim of this study is to illustrate the teaching potential of applying Virtual Reality in the field of human anatomy, where it can be used as a tool for education in medicine. Virtual Reality software was developed as an educational tool. This technological procedure is based entirely on software which runs in stereoscopic goggles to give users the sensation of being in a virtual environment, clearly showing the different bones and foramina which make up the cranium, accompanied by audio explanations. In the results, the structure of the cranium is described in detail from both inside and out. The importance of exhaustive morphological knowledge of the cranial fossae is further discussed, and its application to the design of microsurgery is also commented on.

84 citations


Journal ArticleDOI
TL;DR: This study proposes a reliable and fast Extreme Learning Machine (ELM)-based tissue characterization system (a class of Symtosis) for risk stratification of ultrasound liver images using ELM to train single layer feed forward neural network (SLFFNN).
Abstract: Fatty Liver Disease (FLD) is caused by the deposition of fat in liver cells and leads to deadly diseases such as liver cancer. Several FLD detection and characterization systems using machine learning (ML) based on Support Vector Machines (SVM) have been applied. These ML systems utilize a large number of ultrasonic grayscale features, a pooling strategy for selecting the best features, and several combinations of training/testing. As a result, they are computationally intensive, slow, and do not guarantee high performance due to the mismatch between grayscale features and classifier type. This study proposes a reliable and fast Extreme Learning Machine (ELM)-based tissue characterization system (a class of Symtosis) for risk stratification of ultrasound liver images. ELM is used to train a single-layer feed-forward neural network (SLFFNN). The input-to-hidden layer weights are randomly generated, reducing computational cost. The only weights to be trained are the hidden-to-output layer weights, which are computed in a single pass (without any iteration), making ELM faster than conventional ML methods. Adopting four types of K-fold cross-validation (K = 2, 3, 5 and 10) protocols on three kinds of data sizes: S0-original, S4-four splits, S8-sixty-four splits (a total of 12 cases) and 46 types of grayscale features, we stratify the FLD US images using ELM and benchmark against SVM. Using the US liver database of 63 patients (27 normal/36 abnormal), our results demonstrate the superior performance of ELM compared to SVM, for all cross-validation protocols (K2, K3, K5 and K10) and all types of US data sets (S0, S4, and S8) in terms of sensitivity, specificity, accuracy and area under the curve (AUC). Using the K10 cross-validation protocol on the S8 data set, ELM showed an accuracy of 96.75% compared to 89.01% for SVM, and correspondingly, AUCs of 0.97 and 0.91, respectively. Further experiments also showed a mean reliability of 99% for the ELM classifier, along with a mean speed improvement of 40% using ELM against SVM. We validated the Symtosis system using a two-class public facial biometric dataset, demonstrating an accuracy of 100%.
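The core of ELM, random untrained input weights plus a one-shot least-squares solve for the output weights, can be sketched as follows; layer sizes and the tanh activation are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def elm_train(X, y, hidden=40, seed=0):
    """Extreme Learning Machine: input-to-hidden weights are random
    and never trained; hidden-to-output weights are solved in a
    single least-squares pass (no iteration)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random, fixed
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden activations
    beta = np.linalg.pinv(H) @ y                # one-shot solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single pseudoinverse solve is what makes ELM fast relative to iteratively trained networks or kernel SVM grid searches.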

84 citations


Journal ArticleDOI
TL;DR: The design, development and applications of a Belief Rule Based Expert System (BRBES) with the ability to handle various types of uncertainties to diagnose TB are presented.
Abstract: The primary diagnosis of Tuberculosis (TB) is usually carried out by looking at the various signs and symptoms of a patient. However, these signs and symptoms cannot be measured with 100% certainty since they are associated with various types of uncertainty such as vagueness, imprecision, randomness, ignorance and incompleteness. Consequently, traditional primary diagnosis based on these signs and symptoms, as carried out by physicians, cannot deliver reliable results. Therefore, this article presents the design, development and applications of a Belief Rule Based Expert System (BRBES) with the ability to handle various types of uncertainty to diagnose TB. The knowledge base of this system is constructed by taking experts' suggestions and by analyzing historical data of TB patients. Experiments carried out on the data of 100 patients demonstrate that the results generated by the BRBES are more reliable than those of a human expert as well as a fuzzy rule based expert system.

78 citations


Journal ArticleDOI
TL;DR: A lightweight authentication protocol for TMIS whose security analysis ensures resilience against all the security attacks considered, and whose performance is comparable with related previous research.
Abstract: The Telecare Medical Information System (TMIS) provides a standard platform for patients to receive necessary medical treatment from doctor(s) via Internet communication. Security protection is important for patients' medical records (data) because they contain very sensitive information. Besides, patient anonymity is another important property, which must be protected. Most recently, Chiou et al. suggested an authentication protocol for TMIS utilizing the concept of a cloud environment. They claimed that their protocol preserves patient anonymity and is well protected. We reviewed their protocol and found that it completely fails to preserve patient anonymity. Further, the same protocol is not protected against the stolen mobile device attack. In order to improve the security level while keeping complexity low, we design a lightweight authentication protocol for the same environment. Our security analysis ensures resilience against all the security attacks considered. The performance of our protocol is comparable with related previous research.

Journal ArticleDOI
TL;DR: This paper presents a system that bridges this gap by enabling patients to follow therapy at home by employing an ontology-based question module for recollecting traumatic memories to further elicit a detailed memory recollection.
Abstract: Although post-traumatic stress disorder (PTSD) is well treatable, many people do not get the desired treatment due to barriers to care (such as stigma and cost). This paper presents a system that bridges this gap by enabling patients to follow therapy at home. A therapist is only involved remotely, to monitor progress and serve as a safety net. With this system, patients can recollect their memories in a digital diary and recreate them in a 3D WorldBuilder. Throughout the therapy, a virtual agent is present to inform and guide patients through the sessions, employing an ontology-based question module for recollecting traumatic memories to further elicit a detailed memory recollection. In a usability study with former PTSD patients (n = 4), these questions were found useful for memory recollection. Moreover, the usability of the whole system was rated positively. This system has the potential to be a valuable addition to the spectrum of PTSD treatments, offering a novel type of home therapy assisted by a virtual agent.

Journal ArticleDOI
TL;DR: It can be concluded that the proposed approach can be applied for recognition of focal EEG signals to localize epileptogenic zones, and it was found to perform better than state-of-the-art approaches.
Abstract: Identifying epileptogenic zones prior to surgery is an essential and crucial step in treating patients having pharmacoresistant focal epilepsy. The electroencephalogram (EEG) is a significant measurement benchmark to assess patients suffering from epilepsy. This paper investigates the application of multi-features derived from different domains to recognize focal and non-focal epileptic seizures obtained from pharmacoresistant focal epilepsy patients from the Bern Barcelona database. From the dataset, five different classification tasks were formed. In total, 26 features were extracted from focal and non-focal EEG. Significant features were selected using the Wilcoxon rank sum test at the 95% significance level (p < 0.05, i.e. |z| > 1.96). It was hypothesized that removing outliers improves the classification accuracy. Tukey's range test was adopted for pruning outliers from the feature set. Finally, 21 features were classified using an optimized support vector machine (SVM) classifier with 10-fold cross validation. A Bayesian optimization technique was adopted to minimize the cross-validation loss. From the simulation results, it was inferred that the highest sensitivity, specificity, and classification accuracy of 94.56%, 89.74%, and 92.15%, respectively, were achieved, which was found to be better than the state-of-the-art approaches. Further, it was observed that the classification accuracy improved from 80.2% with outliers to 92.15% without outliers. The classifier performance metrics ensure the suitability of the proposed multi-features with the optimized SVM classifier. It can be concluded that the proposed approach can be applied for recognition of focal EEG signals to localize epileptogenic zones.
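The feature screening step, keeping a feature only when its Wilcoxon rank-sum z statistic exceeds 1.96 in magnitude, can be sketched with the normal approximation (assuming no tied values; a generic illustration, not the paper's code):

```python
import numpy as np

def ranksum_z(a, b):
    """Normal-approximation z statistic for the Wilcoxon rank-sum
    test of samples a and b (assumes no tied values). A feature is
    kept at the 95% level when |z| > 1.96."""
    data = np.concatenate([a, b])
    ranks = data.argsort().argsort() + 1.0    # ranks 1..n (no ties)
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()                      # rank sum of group a
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of W under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma
```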

Journal ArticleDOI
TL;DR: An enhanced genetic programming algorithm is proposed that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
Abstract: Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but still even the best methods produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). The enhanced genetic programming algorithm incorporates the three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.
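Of the four techniques, k-nearest neighbors is the simplest to illustrate: match the most recent glucose readings against earlier windows of the same series and average the values that followed the closest matches. This is a deliberately simplified univariate sketch with assumed window/horizon parameters, not the paper's multi-input predictor:

```python
import numpy as np

def knn_forecast(history, horizon=1, window=3, k=2):
    """Forecast a glucose value `horizon` steps ahead by matching the
    latest `window` readings against past windows and averaging the
    k nearest matches' follow-up values."""
    query = np.array(history[-window:], dtype=float)
    candidates = []
    for i in range(len(history) - window - horizon + 1):
        past = np.array(history[i:i + window], dtype=float)
        target = history[i + window + horizon - 1]   # value that followed
        candidates.append((np.linalg.norm(past - query), target))
    candidates.sort(key=lambda c: c[0])              # nearest first
    return sum(t for _, t in candidates[:k]) / k
```

The real predictor would also condition on carbohydrate intakes and insulin injections, which this univariate sketch omits.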

Journal ArticleDOI
TL;DR: Investigation of the effect of combat stress on the psychophysiological response and on the attention and memory of warfighters in a simulated combat situation suggests that combat stress activates the fight-or-flight system of soldiers.
Abstract: The present research aimed to analyze the effect of combat stress on the psychophysiological response and on the attention and memory of warfighters in a simulated combat situation. Variables of blood oxygen saturation, heart rate, blood glucose, blood lactate, body temperature, lower body muscular strength manifestation, cortical arousal, autonomic modulation, state anxiety, and memory and attention (through a postmission questionnaire) were analyzed before and after a combat simulation in 20 male professional Spanish Army warfighters. The combat simulation produced a significant increase (p < 0.05) in explosive leg strength, rated perceived exertion, blood glucose, blood lactate, somatic anxiety, heart rate, and the low-frequency domain of heart rate variability (LF), and a significant decrease of the high-frequency domain of heart rate variability (HF). The percentages of correct responses in the postmission questionnaire show that elements more closely related to a threat to physical integrity are the most correctly remembered. There were significant differences in the postmission questionnaire variables when participants were divided by the cortical arousal post: sounds no response, mobile phone correct, mobile phone no response, odours correct. The correlation analysis showed positive correlations: LF post/body temperature post, HF post/correct sound, body temperature post/glucose post, CFFT pre/lactate post, CFFT post/wrong sound, glucose post/AC pre, AC post/wrong fusil, AS post/SC post and SC post/wrong olfactory; and negative correlations: LF post/correct sound, body temperature post/lactate post and glucose post/lactate post. These data suggest that combat stress activates the fight-or-flight system of soldiers. In conclusion, combat stress produces an increased psychophysiological response that causes a selective decrease of memory, depending on whether the objects are dangerous or harmless.

Journal ArticleDOI
TL;DR: Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT) and can achieve better classification accuracy (CA) than single classification models.
Abstract: Diabetic retinopathy (DR), a retinal vascular disease, is the main complication of diabetes and can lead to blindness. Regular screening for early DR detection is considered a labor- and resource-intensive task; therefore, automatic detection of DR by computational techniques is an attractive solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification process is often performed poorly. Recently, a few research works have been designed for analyzing texture discrimination capacity in FI to distinguish healthy images. However, the feature extraction (FE) process was not performed well, due to the high dimensionality. Therefore, to identify retinal features for DR diagnosis and early detection, a Machine Learning and Ensemble Classification method, called the Machine Learning Bagging Ensemble Classifier (ML-BEC), is designed. The ML-BEC method comprises two stages. The first stage comprises extraction of the candidate objects from Retinal Images (RI). The candidate objects, or features, for DR disease diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness and variance. These features are initially extracted by applying a Machine Learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across high-dimensional images where the images are separated into similar and dissimilar pairs. Then, t-SNE describes a similar probability distribution across the points in the low-dimensional map. This lessens the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage comprises the application of ensemble classifiers to the extracted features for providing accurate analysis of digital FI using machine learning.
In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through its voting process, bagging in ML-BEC minimizes the error due to the variance of the base classifier. With the publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
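Bagging's variance reduction through majority voting can be illustrated with decision stumps as hypothetical base classifiers; the paper's actual base learners and t-SNE features are not reproduced here:

```python
import random

def train_stump(pairs):
    """Best single-threshold classifier over (value, label) pairs."""
    best, best_err = (pairs[0][0], False), len(pairs) + 1
    for thr, _ in pairs:
        for flip in (False, True):
            err = sum(((v > thr) != flip) != bool(lab) for v, lab in pairs)
            if err < best_err:
                best, best_err = (thr, flip), err
    thr, flip = best
    return lambda v: int((v > thr) != flip)

def bagging_ensemble(pairs, n_models=25, seed=7):
    """Train stumps on bootstrap resamples of the data; majority
    voting over the ensemble reduces the variance of any single
    base classifier."""
    rng = random.Random(seed)
    models = [train_stump([rng.choice(pairs) for _ in pairs])
              for _ in range(n_models)]
    return lambda v: int(sum(m(v) for m in models) * 2 >= n_models)
```

Each bootstrap sample sees a slightly different view of the data, so individual stumps disagree near the decision boundary while the vote stays stable.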

Journal ArticleDOI
TL;DR: A novel method for fuzzy medical image retrieval (FMIR) using vector quantization (VQ) with fuzzy signatures in conjunction with fuzzy S-trees is presented to help to determine appropriate healthcare according to the experiences of similar, previous cases.
Abstract: The aim of the article is to present a novel method for fuzzy medical image retrieval (FMIR) using vector quantization (VQ) with fuzzy signatures in conjunction with fuzzy S-trees. In the past, the task of searching for similar pictures was based not on similar content (e.g. shapes, colour) but on the picture name. Some methods exist for this purpose, but there is still room for the development of more efficient methods. The proposed image retrieval system is used for finding similar images, in our case in the medical area (in mammography), in addition to creating a list of similar images (cases). The created list is used for assessing the nature of the finding, i.e. whether it is malignant or benign. The suggested method is compared to a method using Normalized Compression Distance (NCD) instead of fuzzy signatures and the fuzzy S-tree. The method with NCD is useful for creating the list of similar cases for malignancy assessment, but it is not able to capture the area of interest in the image. The proposed method is going to be added to a complex decision support system to help determine appropriate healthcare according to the experience of similar, previous cases.
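The vector quantization step can be illustrated with a tiny k-means codebook builder: image descriptors are mapped to the nearest entry of a learned codebook. This is a generic sketch of VQ only; the paper's fuzzy signatures and S-trees are beyond it:

```python
def quantize(v, codebook):
    """Index of the nearest codebook vector (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

def kmeans_codebook(vectors, k=2, iters=10):
    """Tiny k-means codebook builder: assign each vector to its
    nearest centroid, then move each centroid to its group's mean."""
    codebook = [list(v) for v in vectors[:k]]   # init from first k vectors
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            groups[quantize(v, codebook)].append(v)
        for i, g in enumerate(groups):
            if g:
                codebook[i] = [sum(col) / len(g) for col in zip(*g)]
    return codebook
```

Once descriptors are quantized to codebook indices, retrieval reduces to comparing compact index signatures rather than raw pixels.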

Journal ArticleDOI
TL;DR: A hybrid approach for medical image registration has been developed that employs a modified Mutual Information (MI) as a similarity metric and the Particle Swarm Optimization (PSO) method to combine information from different images into a normalized frame of reference.
Abstract: Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used for providing information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) as a similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is effected using the versatile Particle Swarm Optimization method, which is easily implemented and has few parameters to adjust. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or of different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image data in the developed approach.
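The mutual information similarity metric at the heart of this approach is commonly estimated from a joint intensity histogram. The sketch below shows the standard (unmodified) MI between two images, without the paper's GVF weighting; the bin count is an assumed parameter:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Histogram estimate of mutual information (in nats) between two
    equally sized images; higher MI means better intensity alignment."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In registration, an optimizer such as PSO would search over transform parameters, warping one image and scoring each candidate with this metric.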

Journal ArticleDOI
TL;DR: This work designs a symmetric key based authentication protocol for the WMSN environment that uses only computationally efficient operations to remain lightweight, and demonstrates the proposed scheme's security against active attacks, namely the man-in-the-middle attack and the replay attack.
Abstract: Wireless medical sensor networks (WMSN) comprise distributed sensors, which can sense human physiological signs and monitor the health condition of the patient. Providing privacy for the patient's data is an important and challenging issue, because information in WMSN passes over a public channel. Thus, the patient's sensitive information can be obtained by eavesdropping or by unauthorized use of the handheld devices that health professionals use to monitor the patient. Therefore, there is an essential need to restrict unauthorized access to the patient's medical information, and an efficient authentication scheme for healthcare applications is needed to preserve the privacy of patients' vital signs. To ensure secure and authorized communication in WMSN, we design a symmetric key based authentication protocol for the WMSN environment. The proposed protocol uses only computationally efficient operations to remain lightweight. We analyze the security of the proposed protocol. We use a formal security proof algorithm to show the scheme's security against known attacks. We also use the Automated Validation of Internet Security Protocols and Applications (AVISPA) simulator to show that the protocol is secure against the man-in-the-middle attack and the replay attack. Additionally, we adopt an informal analysis to discuss the key attributes of the proposed scheme. From the formal proof of security, we can see that an attacker has a negligible probability of breaking the protocol security. The AVISPA simulator also demonstrates the proposed scheme's security against active attacks, namely the man-in-the-middle attack and the replay attack. Additionally, through a comparison of computational efficiency and security attributes with several recent schemes, the proposed scheme proves to be better.
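The general shape of such a lightweight symmetric-key scheme can be sketched with a generic nonce-based HMAC challenge-response; this is not the paper's protocol, only a minimal example of how fresh nonces on both sides defeat replay while using only cheap symmetric operations.

```python
import hmac
import hashlib
import os

def tag(key, *parts):
    # keyed tag (HMAC-SHA256) over the concatenated message parts
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(p)
    return h.digest()

def handshake(shared_key):
    """Mutual authentication between a sensor node and a hub sharing a key.

    Each side contributes a fresh random nonce, so a recorded transcript
    cannot be replayed in a later session."""
    n_hub = os.urandom(16)        # hub -> sensor: challenge nonce
    n_sensor = os.urandom(16)     # sensor -> hub: its own nonce plus a proof
    sensor_proof = tag(shared_key, n_hub, n_sensor, b"sensor")
    # hub recomputes the expected tag; a mismatch aborts the handshake
    if not hmac.compare_digest(sensor_proof,
                               tag(shared_key, n_hub, n_sensor, b"sensor")):
        return None
    hub_proof = tag(shared_key, n_sensor, n_hub, b"hub")
    if not hmac.compare_digest(hub_proof,
                               tag(shared_key, n_sensor, n_hub, b"hub")):
        return None
    # both sides derive the same per-session key from the two nonces
    return tag(shared_key, n_hub, n_sensor, b"session")
```

Because the session key depends on both nonces, every run of the handshake yields a different key even under the same long-term shared key.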

Journal ArticleDOI
TL;DR: A polling-based principal component analysis (PCA) strategy embedded in the machine learning framework to select and retain dominant features, resulting in superior performance; the approach can be adapted in clinical settings.
Abstract: Severe atherosclerosis disease in carotid arteries causes stenosis which in turn leads to stroke. Machine learning systems have been previously developed for plaque wall risk assessment using morphology-based characterization. The fundamental assumption in such systems is the extraction of the grayscale features of the plaque region. Even though these systems have the ability to perform risk stratification, they lack the ability to achieve higher performance due to their inability to select and retain dominant features. This paper introduces a polling-based principal component analysis (PCA) strategy embedded in the machine learning framework to select and retain dominant features, resulting in superior performance. This leads to more stability and reliability. The automated system uses offline image data along with the ground truth labels to generate the parameters, which are then used to transform the online grayscale features to predict the risk of stroke. A set of sixteen grayscale plaque features is computed. Utilizing the cross-validation protocol (K = 10) and the PCA cutoff of 0.995, the machine learning system is able to achieve an accuracy of 98.55 and 98.83% corresponding to the carotid far wall and near wall plaques, respectively. The corresponding reliability of the system was 94.56 and 95.63%, respectively. The automated system was validated against the manual risk assessment system, and the precision of merit for the same cross-validation settings and PCA cutoffs is 98.28 and 93.92% for the far and the near wall, respectively. PCA-embedded morphology-based plaque characterization shows a powerful strategy for risk assessment and can be adapted in clinical settings.
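The core mechanism, retaining leading principal components until a cumulative explained-variance cutoff (here 0.995) is reached, can be sketched as follows. This is a generic from-scratch PCA via power iteration on the feature covariance, not the authors' polling strategy; names and the toy data are illustrative.

```python
import random

def covariance(X):
    # sample mean and covariance matrix of row-vector observations
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        c = [row[j] - mu[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                C[i][j] += c[i] * c[j] / (n - 1)
    return mu, C

def top_components(C, cutoff=0.995, iters=200):
    # leading eigenvectors via power iteration with deflation, kept until
    # the cumulative explained variance reaches the cutoff
    d = len(C)
    total = sum(C[i][i] for i in range(d))       # trace = total variance
    comps, explained = [], 0.0
    A = [row[:] for row in C]
    for _ in range(d):
        v = [1.0 / d ** 0.5] * d
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5 or 1.0
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(d)) for i in range(d))
        comps.append(v)
        explained += lam
        if explained / total >= cutoff:
            break
        for i in range(d):                       # deflate: A -= lam * v v^T
            for j in range(d):
                A[i][j] -= lam * v[i] * v[j]
    return comps

def project(X, mu, comps):
    # transform observations into the retained principal subspace
    return [[sum((row[j] - mu[j]) * v[j] for j in range(len(mu))) for v in comps]
            for row in X]
```

With a 0.995 cutoff, features whose variance is concentrated in a few directions collapse to a small retained set, which is exactly what lets the downstream classifier work on dominant features only.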

Journal ArticleDOI
TL;DR: A new computer aided diagnosis (CAD) system for detection of early pulmonary nodules, which can help radiologists quickly locate suspected nodules, make judgments, and obtain a timely reference for pulmonary nodule diagnosis.
Abstract: Lung cancer remains one of the most concerning diseases around the world. Lung nodules form in the pulmonary parenchyma and indicate a latent risk of lung cancer. A computer-aided pulmonary nodule detection system is therefore necessary, as it can reduce diagnosis time and decrease patient mortality. In this study, we have proposed a new computer aided diagnosis (CAD) system for detection of early pulmonary nodules, which can help radiologists quickly locate suspected nodules and make judgments. This system consists of four main sections: pulmonary parenchyma segmentation, nodule candidate detection, feature extraction (22 features in total) and nodule classification. The publicly available data set created by the Lung Image Database Consortium (LIDC) is used for training and testing. This study selects 6400 slices from 80 CT scans containing a total of 978 nodules, labeled by four radiologists. Through a fast segmentation method proposed in this paper, pulmonary nodules comprising 888 true nodules and 11,379 false positive nodules are segmented. By means of an ensemble classifier, Random Forest (RF), this study achieves 93.2, 92.4, 94.8 and 97.6% for accuracy, sensitivity, specificity and area under the curve (AUC), respectively. Compared with a support vector machine (SVM) classifier, RF can reject more false positive nodules and achieve a larger AUC. With the help of this CAD system, radiologists can obtain a timely reference for pulmonary nodule diagnosis.
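The four figures the study reports (accuracy, sensitivity, specificity, AUC) are standard detection metrics; a minimal sketch of how they are computed from classifier scores is below. The AUC uses the Mann-Whitney formulation (probability that a random true nodule outranks a random false candidate), which is a generic method, not necessarily how the authors computed it.

```python
def confusion_metrics(y_true, scores, threshold=0.5):
    # accuracy, sensitivity and specificity at a fixed score threshold
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on true nodules
        "specificity": tn / (tn + fp),   # rejection of false candidates
    }

def auc(y_true, scores):
    # area under the ROC curve via the Mann-Whitney U statistic: the
    # probability that a random positive scores above a random negative
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike the thresholded metrics, the AUC is threshold-free, which is why it is the natural figure for comparing RF against SVM across operating points.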

Journal ArticleDOI
TL;DR: A new automated method for classifying the heart status using a rule-based classification tree into normal and three abnormal cases is presented; namely aortic valve stenosis, aortic insufficiency, and ventricular septal defect.
Abstract: In order to assist the diagnosis procedure for heart sound signals, this paper presents a new automated method for classifying the heart status using a rule-based classification tree into normal and three abnormal cases, namely aortic valve stenosis, aortic insufficiency, and ventricular septal defect. The developed method includes three main steps as follows. First, one cycle of the heart sound signal is automatically detected and segmented based on time properties of the heart signals. Second, the segmented cycle is preprocessed with the discrete wavelet transform, and the largest Lyapunov exponents are then calculated to generate the dynamical features of the heart sound time series. Finally, a rule-based classification tree is fed these Lyapunov exponents to give the final decision on the heart health status. The developed method has been tested successfully on twenty-two datasets of normal heart sounds and murmurs with a success rate of 95.5%. The resulting errors can be easily corrected by modifying the classification rules; consequently, the accuracy of automated heart sound diagnosis can be further improved.
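The key feature here, the largest Lyapunov exponent, measures how fast nearby trajectories of the reconstructed time series diverge. A crude Rosenstein-style estimate is sketched below: delay-embed the series, find each point's nearest neighbour, and average the one-step log divergence. This is a simplified stand-in for the paper's feature pipeline; the embedding parameters are illustrative assumptions.

```python
import math

def largest_lyapunov(series, dim=3, tau=1, min_sep=10):
    # delay-embed the series into `dim`-dimensional state vectors
    m = len(series) - (dim - 1) * tau
    emb = [[series[i + j * tau] for j in range(dim)] for i in range(m)]

    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    rates = []
    for i in range(m - 1):
        # nearest neighbour, excluding temporally close points
        best, best_d = None, float("inf")
        for j in range(m - 1):
            if abs(i - j) <= min_sep:
                continue
            d = dist(emb[i], emb[j])
            if 0 < d < best_d:
                best, best_d = j, d
        if best is None:
            continue
        d1 = dist(emb[i + 1], emb[best + 1])
        if d1 > 0:
            # one-step logarithmic divergence of the neighbouring pair
            rates.append(math.log(d1 / best_d))
    return sum(rates) / len(rates)
```

On a chaotic series (e.g. the fully chaotic logistic map) the estimate is positive, while a smooth periodic signal stays near zero, which is what makes the exponent discriminative between heart sound dynamics.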

Journal ArticleDOI
TL;DR: It is concluded that elite soldiers presented a higher anaerobic metabolism activation and muscular strength in combat than non-elite soldiers, but cardiovascular, cortical, and muscular strength manifestations presented the same response in both elite and non-elite soldiers.
Abstract: We aimed to analyse the effect of combat stress on the psychophysiological responses of elite and non-elite soldiers. We analysed heart rate, cortical arousal, skin temperature, blood lactate concentration and lower body muscular strength before and after a tactical combat simulation in 40 warfighters divided into two groups: elite (n: 20; 28.5 ± 6.38 years) and non-elite (n: 20; 31.94 ± 6.24 years). Elite soldiers presented a significantly higher lactate concentration after combat than non-elite soldiers (3.8 ± 1.5 vs. 6.6 ± 1.3 mmol/L). Non-elite soldiers had a higher heart rate before and after the simulation than elite soldiers (82.9 ± 12.3 vs. 64.4 ± 11 bpm pre for non-elite and elite respectively; 93.0 ± 12.8 vs. 88 ± 13.8 bpm post for non-elite and elite respectively). Elite soldiers presented higher lower body muscular strength than non-elite soldiers in all tests, both before and after the combat simulation. Cortical arousal was not modified significantly in either group. We conclude that elite soldiers presented a higher anaerobic metabolism activation and muscular strength in combat than non-elite soldiers, but cardiovascular, cortical, and muscular strength manifestations presented the same response in both elite and non-elite soldiers.

Journal ArticleDOI
TL;DR: This work proposes a novel symmetric encryption algorithm based on logistic map with double chaotic layer encryption (DCLE) in diffusion process and just one round of confusion-diffusion for the confidentiality and privacy of clinical information such as electrocardiograms, electroencephalograms, and blood pressure for applications in telemedicine.
Abstract: Recently, telemedicine has offered medical services remotely via telecommunications systems and physiological monitoring devices. This scheme conveniently provides healthcare delivery services between physicians and patients, since some patients cannot attend the hospital for various reasons. However, transmission of information over an insecure channel such as the internet, or private data storage, generates a security problem. Therefore, authentication, confidentiality, and privacy are important challenges in telemedicine, where only authorized users should have access to medical or clinical records. On the other hand, chaotic systems have been implemented efficiently in cryptographic systems to provide confidentiality and privacy. In this work, we propose a novel symmetric encryption algorithm based on the logistic map with double chaotic layer encryption (DCLE) in the diffusion process and just one round of confusion-diffusion, for the confidentiality and privacy of clinical information such as electrocardiograms (ECG), electroencephalograms (EEG), and blood pressure (BP) in telemedicine applications. The clinical signals are acquired from the PhysioBank database for encryption purposes and analysis. In contrast with recent schemes in the literature, we present a secure cryptographic algorithm based on chaos, validated with the most complete security analysis to date. In addition, the cryptograms are validated with the most complete pseudorandomness tests, based on the National Institute of Standards and Technology (NIST) 800-22 suite. All results are from MATLAB simulations, and all of them show the effectiveness, security, robustness, and potential use of the proposed scheme in telemedicine.
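To make the idea of layered chaotic diffusion concrete, the sketch below derives two byte keystreams from the logistic map x → r·x·(1−x) and applies two layers (XOR, then modular addition). This is only a toy illustration of the map-driven construction, not the paper's DCLE scheme or its parameters, and it offers no security guarantees on its own.

```python
def logistic_stream(x0, r, n, burn_in=200):
    # byte keystream from iterates of the logistic map x -> r*x*(1-x)
    x = x0
    for _ in range(burn_in):            # discard the initial transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 1e6) % 256)
    return stream

def encrypt(plaintext, x1, x2, r=3.99):
    # layer 1: XOR with one chaotic keystream; layer 2: add a second mod 256
    s1 = logistic_stream(x1, r, len(plaintext))
    s2 = logistic_stream(x2, r, len(plaintext))
    layer1 = [b ^ k for b, k in zip(plaintext, s1)]
    return bytes((b + k) % 256 for b, k in zip(layer1, s2))

def decrypt(ciphertext, x1, x2, r=3.99):
    # undo the layers in reverse order with the same keystreams
    s1 = logistic_stream(x1, r, len(ciphertext))
    s2 = logistic_stream(x2, r, len(ciphertext))
    layer1 = [(b - k) % 256 for b, k in zip(ciphertext, s2)]
    return bytes(b ^ k for b, k in zip(layer1, s1))
```

The initial conditions x1 and x2 act as the secret key; because the map is sensitive to initial conditions, even a tiny perturbation of the key yields a completely different keystream after the burn-in.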

Journal ArticleDOI
TL;DR: The sources and techniques of Big Data used in the health sector represent a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques to tasks such as identifying patients at risk of readmission, preventing hospital-acquired infections or chronic diseases, and obtaining high-quality predictive models.
Abstract: The main objective of this paper is to present a review of existing research in the literature referring to Big Data sources and techniques in the health sector, and to identify which of these techniques are the most used in the prediction of chronic diseases. Academic databases and systems such as IEEE Xplore, Scopus, PubMed and Science Direct were searched, considering publication dates from 2006 to the present. Several search criteria were established, such as 'techniques' OR 'sources' AND 'Big Data' AND 'medicine' OR 'health', 'techniques' AND 'Big Data' AND 'chronic diseases', etc., and papers were selected for their description of the techniques and sources of Big Data in healthcare. The search found a total of 110 articles on techniques and sources of Big Data in health, of which only 32 were identified as relevant works. Many of the articles describe the Big Data platforms, sources and databases used, and identify the techniques most used in the prediction of chronic diseases. From the review of the analyzed research articles, it can be noticed that the sources and techniques of Big Data used in the health sector represent a relevant factor in terms of effectiveness, since they allow the application of predictive analysis techniques to tasks such as identifying patients at risk of readmission, preventing hospital-acquired infections or chronic diseases, and obtaining high-quality predictive models.

Journal ArticleDOI
TL;DR: In this paper, the authors report the development of a planning framework for telemedicine services based on needs assessment, which is based on the key processes in need assessment, Penchansky and Thomas's dimensions of access, and Bradshaw's types of need.
Abstract: Providing equitable access to healthcare services in rural and remote communities is an ongoing challenge that faces most governments. By increasing access to specialty expertise, telemedicine may be a potential solution to this problem. Regardless of its potential, many telemedicine initiatives do not progress beyond the research phase, and are not implemented into mainstream practice. One reason may be that some telemedicine services are developed without the appropriate planning to ascertain community needs and clinical requirements. The aim of this paper is to report the development of a planning framework for telemedicine services based on needs assessment. The presented framework is based on the key processes in needs assessment, Penchansky and Thomas's dimensions of access, and Bradshaw's types of need. This proposed planning framework consists of two phases. Phase one comprises data collection and needs assessment, and includes assessment of availability and expressed needs; accessibility; perception and affordability. Phase two involves prioritising the demand for health services, balanced against the known limitations of supply, and the implementation of an appropriate telemedicine service that reflects and meets the needs of the community. Using a structured framework for the planning of telemedicine services, based on needs assessment, may help with the identification and prioritisation of community health needs.

Journal ArticleDOI
TL;DR: It is found that live videoconference consultations are generally well accepted by both clients and clinicians, and these have been successfully used in several genetic counseling settings in practice.
Abstract: Although telegenetics as a telehealth tool for online genetic counseling was primarily initiated to improve access to genetics care in remote areas, the increasing demand for genetic services with personalized genomic medicine, shortage of clinical geneticists, and the expertise of established genetic centers make telegenetics an attractive alternative to traditional in-person genetic counseling. We review the scope of current telegenetics practice, user experience of patients and clinicians, quality of care in comparison to traditional counseling, and the advantages and disadvantages of information and communication technology in telegenetics. We found that live videoconference consultations are generally well accepted by both clients and clinicians, and these have been successfully used in several genetic counseling settings in practice. Future use of telegenetics could increase patients' access to specialized care and help in meeting the increasing demand for genetic services.

Journal ArticleDOI
TL;DR: The findings of the current study provide an insight on the frequency of citations for top cited articles published in Medical Informatics, as well as the quality of the works and journals, and the trends steering Medical Informatics.
Abstract: The number of citations that a research paper receives can be used as a measure of its scientific impact. The objective of this study was to identify and to examine the characteristics of the top 100 cited articles in the field of Medical Informatics based on data acquired from the Thomson Reuters' Web of Science (WOS) in October, 2016. The data was collected using two procedures: first we included articles published in the 24 journals listed in the "Medical Informatics" category; second, we retrieved articles using the key words "informatics", "medical informatics", "biomedical informatics", "clinical informatics" and "health informatics". After removing duplicate records, articles were ranked by the number of citations they received. When the 100 top cited articles had been identified, we collected the following information for each record: all WOS database citations, year of publication, journal, author names, authors' affiliation, country of origin and indexed topics. Citations for the top 100 articles ranged from 346 to 7875, and citations per year ranged from 11.12 to 525. The majority of articles were published in the 2000s (n=43) and 1990s (n=38). Articles were published across 10 journals, most commonly Statistics in Medicine (n=71) and Medical Decision Making (n=28). The articles had an average of 2.47 authors. Statistics and biostatistics modeling was the most common topic (n=71), followed by artificial intelligence (n=12) and medical errors (n=3); other topics included data mining, diagnosis, bioinformatics, information retrieval, and medical imaging. Our bibliometric analysis illustrated a historical perspective on the progress of scientific research on Medical Informatics. Moreover, the findings of the current study provide an insight on the frequency of citations for top cited articles published in Medical Informatics, as well as the quality of the works, journals, and the trends steering Medical Informatics.
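The study's collection procedure (deduplicate records, rank by total citations, report citations per year) is straightforward to express in code. The sketch below assumes simple dictionary records with hypothetical `title`, `year` and `citations` fields; it illustrates the procedure, not the authors' actual tooling.

```python
def top_cited(records, top=100, census_year=2016):
    # deduplicate by normalised title, attach citations-per-year,
    # then rank by total citations and keep the top slice
    seen, unique = set(), []
    for rec in records:
        key = " ".join(rec["title"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(dict(rec))
    for rec in unique:
        years = max(1, census_year - rec["year"] + 1)
        rec["cites_per_year"] = rec["citations"] / years
    unique.sort(key=lambda r: r["citations"], reverse=True)
    return unique[:top]
```

Ranking by total citations favours older papers, which is exactly why the citations-per-year column is worth reporting alongside it.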

Journal ArticleDOI
TL;DR: A three-category classification system to detect the specific category of hearing loss, so that patients can be treated in time; its overall accuracy is 4% higher than the best state-of-the-art approaches.
Abstract: Hearing loss, a partial or total inability to hear, is known as hearing impairment. Untreated hearing loss can adversely affect normal social communication, and it can cause psychological problems in patients. Therefore, we design a three-category classification system to detect the specific category of hearing loss, so that patients can be treated in time. Before the training and test stages, we use data augmentation to produce a balanced dataset. Then we use a deep autoencoder neural network to classify the magnetic resonance brain images. In the deep autoencoder stage, we use a stacked sparse autoencoder to generate visual features, and a softmax layer to classify the brain images into three categories of hearing loss. Our method obtains good experimental results: the overall accuracy is 99.5%, and the processing time is 0.078 s per brain image. Our proposed method based on a stacked sparse autoencoder works well in the classification of hearing loss images. The overall accuracy of our method is 4% higher than the best of the state-of-the-art approaches.
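The balancing step mentioned before training can be sketched as oversampling with small random perturbations until every class matches the largest one. The jitter value and class labels below are illustrative assumptions; the paper does not specify its exact augmentation recipe.

```python
import random

def augment_to_balance(samples, labels, jitter=0.01, seed=42):
    # oversample minority classes with small Gaussian perturbations
    # until every class matches the size of the largest one
    rng = random.Random(seed)
    groups = {}
    for vec, lab in zip(samples, labels):
        groups.setdefault(lab, []).append(vec)
    target = max(len(g) for g in groups.values())
    out_x, out_y = [], []
    for lab, group in groups.items():
        out_x.extend(group)
        out_y.extend([lab] * len(group))
        for _ in range(target - len(group)):
            base = rng.choice(group)           # perturb a random class member
            out_x.append([v + rng.gauss(0.0, jitter) for v in base])
            out_y.append(lab)
    return out_x, out_y
```

Balancing before training keeps the softmax classifier from simply favouring the majority class, which matters when one hearing-loss category is rarer than the others.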

Journal ArticleDOI
TL;DR: Findings suggest that optimal feature set combination would yield a relatively high decoding accuracy that may improve the clinical robustness of MDoF neuroprosthesis.
Abstract: To control multiple degrees of freedom (MDoF) upper limb prostheses, pattern recognition (PR) of electromyogram (EMG) signals has been successfully applied. This technique requires amputees to provide sufficient EMG signals to decode their limb movement intentions (LMIs). However, amputees with neuromuscular disorders or high-level amputation often cannot provide sufficient EMG control signals, and thus the applicability of the EMG-PR technique is limited, especially for this category of amputees. As an alternative approach, electroencephalograph (EEG) signals recorded non-invasively from the brain have been utilized to decode the LMIs of humans. However, most existing EEG based limb movement decoding methods primarily focus on identifying limited classes of upper limb movements. In addition, investigation of EEG feature extraction methods for decoding multiple classes of LMIs has rarely been considered. Therefore, 32 EEG feature extraction methods (including 12 spectral domain descriptors (SDDs) and 20 time domain descriptors (TDDs)) were used to decode multiple classes of motor imagery patterns associated with different upper limb movements based on 64-channel EEG recordings. From the obtained experimental results, the best individual TDD achieved an accuracy of 67.05 ± 3.12%, as against 87.03 ± 2.26% for the best SDD. By applying a linear feature combination technique, an optimal set of combined TDDs recorded an average accuracy of 90.68%, while that of the SDDs achieved an accuracy of 99.55%; both were significantly higher than those of the individual TDDs and SDDs at p < 0.05. Our findings suggest that optimal feature set combination would yield a relatively high decoding accuracy that may improve the clinical robustness of MDoF neuroprosthesis. Trial registration: The study was approved by the ethics committee of the Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077.
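Time domain descriptors of the kind compared here are computed per channel window; the sketch below implements four classic Hudgins-style descriptors as a representative example. The study evaluated 20 TDDs, and these four are an assumption for illustration, not necessarily the study's set; the `eps` dead-band is a common noise guard.

```python
def time_domain_descriptors(signal, eps=1e-6):
    # four classic time-domain descriptors for one EEG/EMG channel window
    n = len(signal)
    mav = sum(abs(v) for v in signal) / n                           # mean absolute value
    wl = sum(abs(signal[i + 1] - signal[i]) for i in range(n - 1))  # waveform length
    zc = sum(1 for i in range(n - 1)                                # zero crossings
             if signal[i] * signal[i + 1] < 0
             and abs(signal[i] - signal[i + 1]) > eps)
    ssc = sum(1 for i in range(1, n - 1)                            # slope sign changes
              if (signal[i] - signal[i - 1]) * (signal[i] - signal[i + 1]) > 0
              and max(abs(signal[i] - signal[i - 1]),
                      abs(signal[i] - signal[i + 1])) > eps)
    return {"mav": mav, "wl": wl, "zc": zc, "ssc": ssc}
```

Concatenating such per-channel descriptors across the 64 channels yields the feature vector that the linear combination step then weights and merges.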