
Showing papers in "Journal of Healthcare Engineering in 2018"


Journal ArticleDOI
TL;DR: The feasibility of classifying chest pathologies in chest X-rays using conventional and deep learning approaches is demonstrated, and comparative results in terms of accuracy, error rate, and training time between the networks are presented.
Abstract: Chest diseases are very serious health problems in people's lives. These diseases include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, and other lung diseases. The timely diagnosis of chest diseases is very important, and many methods have been developed for this purpose. In this paper, we demonstrate the feasibility of classifying chest pathologies in chest X-rays using conventional and deep learning approaches. Convolutional neural networks (CNNs) are presented for the diagnosis of chest diseases, along with the CNN architecture and its design principles. For comparison, backpropagation neural networks (BPNNs) with supervised learning and competitive neural networks (CpNNs) with unsupervised learning are also constructed for diagnosing chest diseases. All the considered networks (CNN, BPNN, and CpNN) are trained and tested on the same chest X-ray database, and the performance of each network is discussed. Comparative results in terms of accuracy, error rate, and training time between the networks are presented.
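The core operation of the CNNs discussed in this abstract is spatial convolution followed by a nonlinearity. A minimal numpy sketch of that single building block, using a toy 5×5 "image" and a hand-picked edge kernel (illustrative only, not the paper's network):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit nonlinearity."""
    return np.maximum(x, 0.0)

# Toy 5x5 "X-ray" with a bright left half; a Sobel-style kernel
# responds strongly at the vertical edge.
img = np.zeros((5, 5))
img[:, :2] = 1.0
sobel = np.array([[1.0, 0.0, -1.0],
                  [2.0, 0.0, -2.0],
                  [1.0, 0.0, -1.0]])
feature_map = relu(conv2d(img, sobel))
```

A real network stacks many such learned kernels with pooling and fully connected layers; this sketch only shows how one feature map arises.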

208 citations


Journal ArticleDOI
TL;DR: This paper focuses on the process of EMR processing and emphatically analyzes the key techniques, and makes an in-depth study of the applications developed based on text mining, together with the open challenges and research issues for future work.
Abstract: Currently, medical institutes generally use EMRs to record a patient's condition, including diagnostic information, procedures performed, and treatment results. EMRs have been recognized as a valuable resource for large-scale analysis. However, EMR data are characterized by diversity, incompleteness, redundancy, and privacy concerns, which make it difficult to carry out data mining and analysis directly. Therefore, it is necessary to preprocess the source data in order to improve data quality and the data mining results. Different types of data require different processing technologies. Most structured data commonly need classic preprocessing technologies, including data cleansing, data integration, data transformation, and data reduction. Semistructured or unstructured data, such as medical text, contain more health information and require more complex and challenging processing methods. The task of information extraction for medical texts mainly includes NER (named-entity recognition) and RE (relation extraction). This paper focuses on the process of EMR processing and emphatically analyzes the key techniques. In addition, we make an in-depth study of the applications developed based on text mining, together with the open challenges and research issues for future work.
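As a toy illustration of the NER step mentioned above, a purely dictionary-based extractor is sketched below. The lexicons and the example sentence are hypothetical; production systems use trained sequence models (e.g., CRFs or neural taggers) rather than lookup tables:

```python
import re

# Hypothetical toy lexicons; real systems learn entity boundaries from data.
DISEASES = {"pneumonia", "asthma", "hypertension"}
DRUGS = {"amoxicillin", "salbutamol"}

def extract_entities(text):
    """Naive dictionary-based NER over a medical sentence."""
    entities = []
    for token in re.findall(r"[A-Za-z]+", text.lower()):
        if token in DISEASES:
            entities.append((token, "DISEASE"))
        elif token in DRUGS:
            entities.append((token, "DRUG"))
    return entities

note = "Patient with pneumonia was treated with amoxicillin."
entities = extract_entities(note)
```

A relation-extraction step would then link the recognized DISEASE and DRUG mentions (e.g., "treated_with").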

170 citations


Journal ArticleDOI
TL;DR: A novel fully automatic segmentation method from MRI data containing in vivo brain gliomas that can not only localize the entire tumor region but can also accurately segment the intratumor structure is presented.
Abstract: Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumors and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, was used to first process the MRI data. The goal of this first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. In particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain promising segmentation results with a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.
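The three evaluation metrics named here (DSC, PPV, sensitivity) are straightforward to compute from binary masks. A small numpy sketch on toy 4×4 masks (not BRATS data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def ppv(pred, truth):
    """Positive predictive value: TP / (TP + FP)."""
    return np.logical_and(pred, truth).sum() / pred.sum()

def sensitivity(pred, truth):
    """Sensitivity: TP / (TP + FN)."""
    return np.logical_and(pred, truth).sum() / truth.sum()

# Toy masks: the ground truth is a 2x2 square; the prediction covers it
# plus two extra pixels (false positives).
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True
```

Here the prediction finds every tumour pixel (sensitivity 1.0) but over-segments, which lowers PPV and Dice.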

138 citations


Journal ArticleDOI
TL;DR: This article explores the potentialities of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery; the perceived overall workload was low, and the self-assessed performance was considered satisfactory.
Abstract: Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potentialities of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was utilized to register the virtual and physical content. The Unity3D game engine was employed to develop the software, allowing interactions with the virtual content using head movements, gestures, and voice commands. Quantitative tests were performed to estimate the accuracy of the system by evaluating the perceived position of augmented reality targets. Mean and maximum errors matched the requirements of the target application. Qualitative tests were carried out to evaluate the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception, interaction, and ergonomics. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interactions obtained positive feedback. Postural discomfort and visual fatigue obtained a nonnegative evaluation for a simulation session of 40 minutes. These results encourage using mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort.
Future works will include Face Validity, Content Validity, and Construct Validity to complete the assessment of the hip arthroplasty simulator.

136 citations


Journal ArticleDOI
TL;DR: This study developed and compared three machine learning algorithms to estimate BP using PPG only and revealed that the regression tree algorithm was the best approach, with overall accuracy acceptable to the ISO standard for BP device validation.
Abstract: Introduction. Blood pressure (BP) is a potential risk factor for cardiovascular diseases, and BP measurement is one of the most useful parameters for the early diagnosis, prevention, and treatment of cardiovascular diseases. At present, BP measurement mainly relies on cuff-based techniques that cause inconvenience and discomfort to users. Although some current prototype cuffless BP measurement techniques are able to reach overall acceptable accuracies, they require both an electrocardiogram (ECG) and a photoplethysmogram (PPG), which makes them unsuitable for true wearable applications. Therefore, developing a sufficiently accurate cuffless BP estimation algorithm based on a single PPG would be clinically and practically useful. Methods. The University of Queensland vital sign dataset (an online database) was accessed to extract raw PPG signals and their corresponding reference BPs (systolic BP and diastolic BP). The database consisted of PPG waveforms from 32 cases, from which 8133 good-quality signal segments (5 s each) were extracted, preprocessed, and normalised in both width and amplitude. The three most significant pulse features (pulse area, pulse rising time, and width at 25%) with their corresponding reference BPs were used to train and test three machine learning algorithms (regression tree, multiple linear regression (MLR), and support vector machine (SVM)). A 10-fold cross-validation was applied to obtain the overall BP estimation accuracy separately for the three machine learning algorithms. Their estimation accuracies were further analysed separately for three clinical BP categories (normotensive, hypertensive, and hypotensive). Finally, they were compared with the ISO standard for noninvasive BP device validation (average difference no greater than 5 mmHg and SD no greater than 8 mmHg). Results.
In terms of overall estimation accuracy, the regression tree achieved the best overall accuracy for SBP (mean ± SD of difference: −0.1 ± 6.5 mmHg) and DBP (−0.6 ± 5.2 mmHg). MLR and SVM achieved an overall mean difference of less than 5 mmHg for both SBP and DBP, but their SDs of difference were >8 mmHg. Regarding the estimation accuracy in each BP category, only the regression tree met the ISO standard for SBP (−1.1 ± 5.7 mmHg) and DBP (−0.03 ± 5.6 mmHg), and only in the normotensive category. MLR and SVM did not achieve acceptable accuracies in any BP category. Conclusion. This study developed and compared three machine learning algorithms to estimate BP using PPG only and revealed that the regression tree algorithm was the best approach, with overall accuracy acceptable to the ISO standard for BP device validation. Furthermore, this study demonstrated that the regression tree algorithm achieved acceptable measurement accuracy only in the normotensive category, suggesting that future algorithm development for BP estimation should be more specific to different BP categories.
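The ISO acceptance criterion applied in this comparison (mean difference no greater than 5 mmHg, SD no greater than 8 mmHg) can be checked directly on paired readings. A minimal numpy sketch with made-up BP values (not the study's data):

```python
import numpy as np

def meets_iso(estimated, reference):
    """ISO criterion used in the study: |mean difference| <= 5 mmHg
    and SD of the differences <= 8 mmHg."""
    diff = np.asarray(estimated, float) - np.asarray(reference, float)
    return bool(abs(diff.mean()) <= 5.0 and diff.std(ddof=1) <= 8.0)

ref = np.array([120.0, 118.0, 125.0, 110.0, 130.0])   # cuff reference SBPs
est_ok = ref + np.array([1.0, -2.0, 0.0, 3.0, -1.0])  # small, unbiased errors
est_biased = ref + 10.0                               # constant +10 mmHg bias
```

Note the criterion penalises both bias (mean) and scatter (SD), which is why MLR and SVM failed despite small mean differences.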

121 citations


Journal ArticleDOI
TL;DR: A method for estimating systolic and diastolic BP based only on a PPG signal is developed, using the multitaper method (MTM) for feature extraction, and an artificial neural network (ANN) for estimation.
Abstract: The prevention, evaluation, and treatment of hypertension have attracted increasing attention in recent years. As photoplethysmography (PPG) technology has been widely applied to wearable sensors, the noninvasive estimation of blood pressure (BP) using the PPG method has received considerable interest. In this paper, a method for estimating systolic and diastolic BP based only on a PPG signal is developed. The multitaper method (MTM) is used for feature extraction, and an artificial neural network (ANN) is used for estimation. Compared with previous approaches, the proposed method obtains better accuracy; the mean absolute error is 4.02 ± 2.79 mmHg for systolic BP and 2.27 ± 1.82 mmHg for diastolic BP.

99 citations


Journal ArticleDOI
TL;DR: An intelligent architecture that takes into account both physiological and cognitive aspects to reduce the degree of obesity and overweight is proposed.
Abstract: According to World Health Organization (WHO) estimates, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults aged 18 years and older (39%) were overweight and over 600 million (13%) of these were obese in 2014; 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications obesity can bring to an individual's quality of life and longevity, as well as the significant cost to healthcare systems. Researchers and developers are adapting existing technology, such as smartphones and wearable gadgets, for controlling obesity, including promoting a healthy eating culture and an active lifestyle. The paper also presents a comprehensive study of the mobile applications and Wireless Body Area Networks most used for controlling obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both physiological and cognitive aspects to reduce the degree of obesity and overweight.
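Among the obesity measures such a review covers, BMI is the simplest. A short sketch of the standard WHO adult cutoffs (the example weights and heights are arbitrary):

```python
def bmi_category(weight_kg, height_m):
    """Return BMI and its WHO adult category."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return bmi, "underweight"
    if bmi < 25:
        return bmi, "normal"
    if bmi < 30:
        return bmi, "overweight"
    return bmi, "obese"
```

BMI is convenient for mobile applications but, as the review notes, it is only a proxy; body fat percentage measures are more informative.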

86 citations


Journal ArticleDOI
TL;DR: This review focuses on innovative studies of the use of Raman scattering in cancer diagnosis and their potential to transition from bench to bedside.
Abstract: Raman scattering has long been used to analyze chemical compositions in biological systems. Owing to its high chemical specificity and noninvasive detection capability, Raman scattering has been widely employed in cancer screening, diagnosis, and intraoperative surgical guidance in the past ten years. In order to overcome the weak signal of spontaneous Raman scattering, coherent Raman scattering and surface-enhanced Raman scattering have been developed and recently applied in the field of cancer research. This review focuses on innovative studies of the use of Raman scattering in cancer diagnosis and their potential to transition from bench to bedside.

78 citations


Journal ArticleDOI
TL;DR: Ten widely used, highly efficient QRS detection algorithms were evaluated, aiming at verifying their performance and usefulness in different application situations; the time cost of analyzing a 10 s ECG segment is given as a quantitative index of computational complexity.
Abstract: A systematic evaluation was performed on ten widely used and highly efficient QRS detection algorithms in this study, aiming at verifying their performance and usefulness in different application situations. Four experiments were carried out on six internationally recognized databases. Firstly, in the test of a high-quality versus a low-quality ECG database, all ten QRS detection algorithms had very high detection accuracy on the high-quality signals (>99%), whereas the results decreased significantly for the poor-quality signals. Secondly, on the normal and arrhythmia ECG databases, all results were high (>95%) except for the RS slope algorithm, with 94.24% on the normal ECG database and 94.44% on the arrhythmia database. Thirdly, for the paced-rhythm ECG database, all ten algorithms were immune to the paced beats (>94%) except the RS slope method, which only achieved 78.99%. Finally, the detection accuracies decreased markedly when dealing with dynamic telehealth ECG signals (all <80%) except the OKB algorithm, with 80.43%. Furthermore, the time cost of analyzing a 10 s ECG segment was given as a quantitative index of computational complexity. All ten algorithms had high numerical efficiency (all <4 ms) except the RS slope (94.07 ms) and sixth-power (8.25 ms) algorithms, and the OKB algorithm had the highest numerical efficiency (1.54 ms).
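Detection accuracy in benchmarks like these is typically computed by matching detected beats to reference annotations within a tolerance window, then reporting sensitivity and positive predictive value. A minimal sketch (the tolerance and timestamps below are illustrative, not taken from the study):

```python
def qrs_detection_stats(detected, reference, tol=0.15):
    """Greedily match detected beat times to reference annotations within
    +/- tol seconds, then report sensitivity and positive predictive value."""
    tp = 0
    used = set()
    for d in detected:
        for i, r in enumerate(reference):
            if i not in used and abs(d - r) <= tol:
                tp += 1
                used.add(i)
                break
    fp = len(detected) - tp
    fn = len(reference) - tp
    se = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return se, ppv

ref = [0.8, 1.6, 2.4, 3.2]           # annotated R-peak times (s)
det = [0.82, 1.59, 2.41, 2.9, 3.21]  # one spurious detection at 2.9 s
se, ppv = qrs_detection_stats(det, ref)
```

Published evaluations use standardised tolerances and annotation files; the matching idea is the same.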

75 citations


Journal ArticleDOI
TL;DR: The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis.
Abstract: Electrocardiogram (ECG) signal analysis has received special attention from researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the activity of the autonomic nervous system in a noninvasive manner. ECG signals have been explored using both linear and nonlinear analysis methods. However, nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plots, Poincaré plots, approximate entropy, and sample entropy, along with their recent applications in ECG signal analysis.
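As an example of one of these nonlinear measures, the Poincaré plot descriptors SD1 and SD2 can be computed directly from successive RR intervals. A numpy sketch with a made-up RR series (not data from the review):

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """SD1/SD2 of the Poincaré plot of successive RR intervals (ms).
    SD1 reflects short-term variability (spread perpendicular to the
    identity line); SD2 reflects longer-term variability (spread along it)."""
    rr = np.asarray(rr, float)
    x, y = rr[:-1], rr[1:]
    sd1 = np.sqrt(np.var(y - x, ddof=1) / 2.0)
    sd2 = np.sqrt(np.var(y + x, ddof=1) / 2.0)
    return sd1, sd2

rr = [800, 810, 790, 815, 805]  # toy RR intervals in ms
sd1, sd2 = poincare_sd1_sd2(rr)
```

The other measures in the list (sample entropy, DFA, etc.) follow the same pattern of reducing the RR series to a scalar nonlinear descriptor.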

66 citations


Journal ArticleDOI
TL;DR: There is an urgent need for in vitro urinary tract models to facilitate faster research and development for CAUTI prevention; ICs are increasingly seen as a solution to the complications caused by IDs, as ICs pose no risk of biofilm formation due to their short time in the body and carry a lower risk of bladder stone formation.
Abstract: Catheter-associated urinary tract infections (CAUTIs) are among the most common nosocomial infections and can lead to numerous medical complications, from mild catheter encrustation and bladder stones to severe septicaemia, endotoxic shock, and pyelonephritis. Catheters are among the most commonly used medical devices in the world and can be characterised as either indwelling catheters (IDs) or intermittent catheters (ICs). The primary challenges in the use of IDs are biofilm formation and encrustation. ICs are increasingly seen as a solution to the complications caused by IDs, as ICs pose no risk of biofilm formation due to their short time in the body and carry a lower risk of bladder stone formation. Research on IDs has focused on the use of antimicrobial and antibiofilm compounds, while research on ICs has focused on preventing bacteria from entering the urinary tract or coming into contact with the catheter. There is an urgent need for in vitro urinary tract models to facilitate faster research and development for CAUTI prevention. There are currently three urinary tract models that test IDs; however, there is only a single, very limited model for testing ICs, and no standardised urinary tract model to test the efficacy of ICs.

Journal ArticleDOI
TL;DR: A BCI system coupled to a robotic hand orthosis, with a processing stage designed using a bank of temporal filters, the common spatial pattern algorithm for feature extraction, and particle swarm optimisation for feature selection; its performance shows potential for use in the hand rehabilitation of stroke patients.
Abstract: Motor imagery-based brain-computer interfaces (BCI) have shown potential for the rehabilitation of stroke patients; however, low performance has restricted their application in clinical environments. Therefore, this work presents the implementation of a BCI system, coupled to a robotic hand orthosis and driven by hand motor imagery of healthy subjects and the paralysed hand of stroke patients. A novel processing stage was designed using a bank of temporal filters, the common spatial pattern algorithm for feature extraction, and particle swarm optimisation for feature selection. Offline tests were performed to evaluate the proposed processing stage, and the results were compared with those computed with common spatial patterns alone. Afterwards, online tests with healthy subjects were performed, in which the orthosis was activated by the system. Stroke patients' average performance was 74.1 ± 11%. For 4 out of 6 patients, the proposed method showed statistically significantly higher performance than the common spatial pattern method. Healthy subjects' average offline and online performances were 76.2 ± 7.6% and 70.0 ± 6.7%, respectively. For 3 out of 8 healthy subjects, the proposed method showed statistically significantly higher performance than the common spatial pattern method. The system's performance shows its potential for use in the hand rehabilitation of stroke patients.

Journal ArticleDOI
TL;DR: The proposed automatic pneumothorax detection method is based on multiscale intensity texture segmentation, removing the background and noise in chest images to segment abnormal lung regions; rib boundaries are identified with Sobel edge detection.
Abstract: Automatic image segmentation and feature analysis can help doctors treat and diagnose diseases more accurately. Automatic medical image segmentation is difficult due to the varying image quality among equipment. The method proposed in this paper employs multiscale intensity texture analysis and segmentation to solve this problem. Firstly, an SVM is applied to identify common pneumothorax: features are extracted from lung images with the local binary pattern (LBP), and the SVM then determines the pneumothorax classification. Secondly, the proposed automatic pneumothorax detection method, based on multiscale intensity texture segmentation, removes the background and noise in chest images to segment abnormal lung regions. The segmentation of abnormal regions uses textures computed from multiple overlapping blocks. The rib boundaries are identified with Sobel edge detection. Finally, to obtain a complete disease region, the rib boundary is filled in and located between the abnormal regions.
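The LBP features used for the SVM stage encode each pixel as a bit pattern of comparisons with its neighbours. A minimal single-pixel sketch (bit-ordering conventions vary between implementations; this one starts at the top-left neighbour):

```python
import numpy as np

def lbp_pixel(img, r, c):
    """Basic 8-neighbour local binary pattern code for one interior pixel:
    each neighbour >= centre contributes one bit."""
    center = img[r, c]
    # neighbours clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

# Toy patch: bright row above a darker centre -> only the top three bits set.
img = np.array([[9, 9, 9],
                [1, 5, 1],
                [1, 1, 1]])
code = lbp_pixel(img, 1, 1)
```

A full LBP feature vector is the histogram of these codes over an image region, which then feeds the SVM.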

Journal ArticleDOI
TL;DR: A systematic literature review is conducted to identify the contribution of robotics for upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify the prospective research directions in the development of more autonomous rehabilitation processes.
Abstract: Robot-mediated neurorehabilitation is a growing field that seeks to incorporate advances in robotics combined with neuroscience and rehabilitation to define new methods for treating problems related with neurological diseases. In this paper, a systematic literature review is conducted to identify the contribution of robotics for upper limb neurorehabilitation, highlighting its relation with the rehabilitation cycle, and to clarify the prospective research directions in the development of more autonomous rehabilitation processes. With this aim, first, a study and definition of a general rehabilitation process are made, and then, it is particularized for the case of neurorehabilitation, identifying the components involved in the cycle and their degree of interaction between them. Next, this generic process is compared with the current literature in robotics focused on upper limb treatment, analyzing which components of this rehabilitation cycle are being investigated. Finally, the challenges and opportunities to obtain more autonomous rehabilitation processes are discussed. In addition, based on this study, a series of technical requirements that should be taken into account when designing and implementing autonomous robotic systems for rehabilitation is presented and discussed.

Journal ArticleDOI
TL;DR: The biocompatible and injectable nanocomposite scaffold might have great potential for wound healing; histopathologic examination implied that the nanocomposite hydrogel based on nanocurcumin and chitosan could enhance burn wound repair.
Abstract: Burn wound healing is a complex multifactorial process that relies on coordinated signaling molecules to succeed. Curcumin is believed to be a potent antioxidant and anti-inflammatory agent; therefore, it can prevent the prolonged presence of oxygen free radicals, a significant factor inhibiting the optimal healing process. This study describes an extension of work on a biofunctional nanocomposite hydrogel platform prepared using curcumin and an amphiphilic chitosan-g-pluronic copolymer, specialized for burn wound healing applications. This formulation (nCur-CP, a nanocomposite hydrogel) was a free-flowing sol at ambient temperature and instantly converted into a nonflowing gel at body temperature. In addition, a storage study using UV-Vis and DLS confirmed the long-term stability of nCur-CP. The morphology and distribution of nCur in the nanocomposite hydrogels were observed by SEM and TEM, respectively. In vitro studies suggested that nCur-CP supported fibroblast proliferation and exhibited antimicrobial activity. Furthermore, second- and third-degree burn wound models were employed to evaluate the in vivo wound healing activity of nCur-CP. In the second-degree wound model, the nanocomposite hydrogel group showed a higher regenerated collagen density and a thicker epidermis layer. In the third-degree model, the nCur-CP group also exhibited enhanced wound closure. In both models, the groups treated with the nanocomposite material showed higher collagen content, better granulation, and higher wound maturity. Histopathologic examination also implied that the nanocomposite hydrogel based on nanocurcumin and chitosan could enhance burn wound repair. In conclusion, the biocompatible and injectable nanocomposite scaffold might have great potential for wound healing applications.

Journal ArticleDOI
TL;DR: A novel stepwise fine-tuning-based deep learning scheme capable of enabling the deep neural network to imitate the pathologist's perception and to acquire pathology-related knowledge in advance, with very limited extra cost in data annotation, is presented.
Abstract: Deep learning using convolutional neural networks (CNNs) is a distinguished tool for many image classification tasks. Due to its outstanding robustness and generalization, it is also expected to play a key role in facilitating advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue because of the high cost of annotation based on a pathologist's professional observation. Faced with this problem, transfer learning techniques are generally used to reinforce the capacity of deep neural networks. In order to further boost the performance of state-of-the-art deep neural networks and alleviate the insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate dataset. Our proposed scheme is deemed capable of enabling the deep neural network to imitate the pathologist's perception and to acquire pathology-related knowledge in advance, with very limited extra cost in data annotation. The experiments are conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results congruously demonstrate the feasibility and superiority of our proposed scheme for boosting classification performance.

Journal ArticleDOI
TL;DR: This work explored the use of chitosan (Cs) and poly(ethylene oxide) (PEO) blends for the fabrication of electrospun fiber-orientated meshes potentially suitable for engineering fiber-reinforced soft tissues such as tendons, ligaments, or meniscus.
Abstract: This work explored the use of chitosan (Cs) and poly(ethylene oxide) (PEO) blends for the fabrication of electrospun fiber-orientated meshes potentially suitable for engineering fiber-reinforced soft tissues such as tendons, ligaments, or meniscus. To mimic the fiber alignment present in native tissue, the Cs/PEO blend solution was electrospun using a traditional static plate, a rotating drum collector, and a rotating disk collector to obtain random, parallel, and circumferentially oriented fibers, respectively. The different orientations (parallel or circumferential) and the high-speed rotating collectors influenced fiber morphology, leading to a reduction in nanofiber diameters and an improvement in mechanical properties.

Journal ArticleDOI
TL;DR: The iPhone application yielded good results for PPG-based PRV indices compared with ECG-based HRV indices and with the differences among ECG channels; the authors plan to extend the results on the PPG-ECG correspondence with a deeper analysis of the different ECG channels.
Abstract: Background. Heart rate variability (HRV) provides information about the activity of the autonomic nervous system. Because of the small amount of data collected so far, the importance of HRV has not yet been proven in clinical practice. To collect population-level data, smartphone applications leveraging photoplethysmography (PPG) and some medical knowledge could provide the means to do so. Objective. To assess the capabilities of our smartphone application, we compared PPG-derived pulse rate variability (PRV) with ECG-derived HRV. To establish a baseline, we also compared the differences among ECG channels. Method. We took fifty parallel measurements using an iPhone 6 at a 240 Hz sampling frequency and a Cardiax PC-ECG device. The correspondence between the PRV and HRV indices was investigated using correlation, linear regression, and Bland-Altman analysis. Results. PPG accuracy was high: the PPG-ECG deviation is comparable to that between ECG channels. Mean deviations between PPG-ECG and the two ECG channels: RR: 0.01 ms–0.06 ms, SDNN: 0.78 ms–0.46 ms, RMSSD: 1.79 ms–1.21 ms, and pNN50: 2.43%–1.63%. Conclusions. Our iPhone application yielded good results for PPG-based PRV indices compared with ECG-based HRV indices and with the differences among ECG channels. We plan to extend our results on the PPG-ECG correspondence with a deeper analysis of the different ECG channels.
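The Bland-Altman analysis used for the PRV-HRV comparison reduces to the bias and 95% limits of agreement of the paired differences. A numpy sketch with made-up paired RR values (not the study's measurements):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, following Bland-Altman."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

prv = [802, 798, 805, 800]  # toy PPG-derived RR values (ms)
hrv = [800, 800, 803, 801]  # toy ECG-derived RR values (ms)
bias, lo, hi = bland_altman(prv, hrv)
```

Good agreement means a bias near zero and narrow limits; a full analysis also plots each difference against the pair mean.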

Journal ArticleDOI
TL;DR: A framework for learning from healthcare data with imbalanced distributions, incorporating different rebalancing strategies, is developed; the evaluation showed that the framework can significantly improve the detection accuracy of medical incidents due to look-alike sound-alike (LASA) mix-ups.
Abstract: Identifying rare but significant healthcare events in massive unstructured datasets has become a common task in healthcare data analytics. However, imbalanced class distribution in many practical datasets greatly hampers the detection of rare events, as most classification methods implicitly assume an equal occurrence of classes and are designed to maximize the overall classification accuracy. In this study, we develop a framework for learning healthcare data with imbalanced distribution via incorporating different rebalancing strategies. The evaluation results showed that the developed framework can significantly improve the detection accuracy of medical incidents due to look-alike sound-alike (LASA) mix-ups. Specifically, logistic regression combined with the synthetic minority oversampling technique (SMOTE) produces the best detection results, with a significant 45.3% increase in recall (recall = 75.7%) compared with pure logistic regression (recall = 52.1%).
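A minimal sketch of the SMOTE idea used above: each synthetic sample interpolates between a minority-class point and one of its nearest minority-class neighbours. The toy 2-D data are illustrative, not the study's dataset (production code would use the imbalanced-learn library):

```python
import numpy as np

def smote_like(minority, n_new, k=2, rng=None):
    """SMOTE-style oversampling sketch: each synthetic point lies on the
    segment between a random minority sample and one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                   # interpolation fraction in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy minority class
synthetic = smote_like(X_min, 5, rng=0)
```

Rebalancing the training set this way lets classifiers like logistic regression stop ignoring the rare class, which is what drove the recall gain reported here.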

Journal ArticleDOI
TL;DR: This study indicates that it is possible to accurately identify AF or non-AF ECGs from a short-term signal episode, using the MIT-BIH Atrial Fibrillation Database.
Abstract: Atrial fibrillation (AF) is a serious cardiovascular disease characterized by irregular beating and is a major cause of a variety of heart complications, such as myocardial infarction. Automatic AF beat detection is still a challenging task which needs further exploration. A new framework, which combines the modified frequency slice wavelet transform (MFSWT) and convolutional neural networks (CNNs), was proposed for automatic AF beat identification. MFSWT was used to transform 1 s electrocardiogram (ECG) segments into time-frequency images, and the images were then fed into a 12-layer CNN for feature extraction and AF/non-AF beat classification. The results on the MIT-BIH Atrial Fibrillation Database showed that a mean accuracy (Acc) of 81.07% from 5-fold cross-validation was achieved on the test data. The corresponding sensitivity (Se), specificity (Sp), and area under the ROC curve (AUC) were 74.96%, 86.41%, and 0.88, respectively. When one ECG recording with extremely poor signal quality was excluded from the test data, a mean Acc of 84.85% was achieved, with corresponding Se, Sp, and AUC values of 79.05%, 89.99%, and 0.92. This study indicates that it is possible to accurately identify AF or non-AF ECGs from a short-term signal episode.

Journal ArticleDOI
TL;DR: This research presents a novel and scalable approach to solve the challenge of integrating information technology and human interaction in the field of medicine.
Abstract: College of Biomedical Engineering, South-Central University for Nationalities, Wuhan 430074, China Key Laboratory of Cognitive Science, State Ethnic Affairs Commission, Wuhan 430074, China Hubei Key Laboratory of Medical Information Analysis and Tumor Diagnosis & Treatment, Wuhan 430074, China School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330032, China IT Convergence Research Center, Chonbuk National University, Jeonju, Jeonbuk 54896, Republic of Korea

Journal ArticleDOI
TL;DR: This study includes analyses of recent research on operating room scheduling and planning, from 2000 to the present day, according to patient characteristics, performance measures, solution techniques used in the research, the uncertainty of the problem, applicability of the research, and the planning strategy dealt with in the solution.
Abstract: Increased healthcare costs are pushing hospitals to reduce costs and increase the quality of care. Operating rooms are the most important source of income and expense for hospitals. Therefore, hospital management focuses on the effectiveness of schedules and plans. This study includes analyses of recent research on operating room scheduling and planning. Most studies in the literature, from 2000 to the present day, were evaluated according to patient characteristics, performance measures, solution techniques used in the research, the uncertainty of the problem, applicability of the research, and the planning strategy dealt with in the solution. One hundred seventy studies were examined in detail after scanning the Emerald, Science Direct, JSTOR, Springer, Taylor and Francis, and Google Scholar databases. To facilitate the identification of these studies, they are grouped according to the different criteria of concern, and then a detailed overview is presented.

Journal ArticleDOI
TL;DR: The findings of the study suggest that although the respondents' knowledge of telemedicine is limited, most of them have a good attitude toward telemedicine, and the need for training on telemedicine in order to fill the knowledge gap is underlined.
Abstract: Background. In resource-limited environments, such as those categorized as underdeveloped countries, telemedicine is viewed as an effective channel for utilizing scarce medical resources and infrastructure. The aim of this study was to assess knowledge of and attitude toward telemedicine among a cross-section of health professionals working in three hospitals in North West Ethiopia. Methods. An institution-based cross-sectional study was conducted among 312 health professionals working in three different hospitals of the North Gondar Administrative Zone during November 13 to December 10, 2017. Data were collected using structured self-administered questionnaires. Data entry and analysis were done using SPSS version 20. The mean, percentage, and standard deviation were calculated to describe the characteristics of respondents. The chi-square test was used, as appropriate, to evaluate the statistical significance of differences between the responses of the participants. A p value of <0.05 was considered statistically significant. Results. [...] >5 years of work experience. 191 (64.0%) respondents had a good attitude toward telemedicine. Conclusion. The findings of the study suggest that although the respondents' knowledge of telemedicine is limited, most of them have a good attitude toward telemedicine. This study underlines the need for training on telemedicine in order to fill the knowledge gap.

Journal ArticleDOI
TL;DR: An objective machine-learning classification model for classifying glaucomatous optic discs without requiring color fundus images is developed; the NN achieved the best classification performance, with a validated accuracy of 87.8% using only nine ocular parameters.
Abstract: This study develops an objective machine-learning classification model for classifying glaucomatous optic discs and reveals the classificatory criteria to assist in clinical glaucoma management. In this study, 163 glaucoma eyes were labelled with four optic disc types by three glaucoma specialists and then randomly separated into training and test data. All the images of these eyes were captured using optical coherence tomography and laser speckle flowgraphy to quantify the ocular structure and blood-flow-related parameters. A total of 91 parameters were extracted from each eye along with the patients’ background information. Machine-learning classifiers, including the neural network (NN), naive Bayes (NB), support vector machine (SVM), and gradient boosted decision trees (GBDT), were trained to build the classification models, and a hybrid feature selection method that combines minimum redundancy maximum relevance and genetic-algorithm-based feature selection was applied to find the most valid and relevant features for NN, NB, and SVM. A comparison of the performance of the three machine-learning classification models showed that the NN had the best classification performance with a validated accuracy of 87.8% using only nine ocular parameters. These selected quantified parameters enabled the trained NN to classify glaucomatous optic discs with relatively high performance without requiring color fundus images.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the natural image features can be well transferred to represent eye-tracking data, and strabismus can be effectively recognized by the proposed method.
Abstract: Strabismus is one of the most common vision diseases and can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus effectively. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject's eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the fully connected layers of the CNN are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that the natural image features can be well transferred to represent eye-tracking data, and strabismus can be effectively recognized by our proposed method.

Journal ArticleDOI
TL;DR: The proposed approach uses deep neural networks to segment an MRI image of heterogeneously distributed pixels, assigning a class label to each pixel, and an experiment shows that the method can produce reference images closely matching the segmented ground truth images.
Abstract: The concept of the proposed approach is to use deep neural networks to segment an MRI image of heterogeneously distributed pixels, assigning a class label to each pixel. The segmentation process is applied to a preprocessed MRI image, with the trained network then utilized for other test images. As labels are considered expensive assets in supervised training, fewer training images and training labels are used to obtain optimal accuracy. To validate the performance of the proposed approach, an experiment is conducted on other test images (available in the same database) that are not part of the training; the obtained result is of good visual quality in terms of segmentation and quite similar to the ground truth image. The average computed Dice similarity index for the test images is approximately 0.8, whereas the Jaccard similarity measure is approximately 0.6, which is better compared to other methods. This implies that the proposed method can be used to obtain reference images almost similar to the segmented ground truth images.
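The two reported overlap measures have simple set definitions: Dice = 2|A∩B|/(|A|+|B|) and Jaccard = |A∩B|/|A∪B|. A minimal sketch on flat binary masks (the example masks are invented, not from the study's data):

```python
def dice_jaccard(mask_a, mask_b):
    """Overlap between two equal-length binary masks (1 = tissue, 0 = background)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - intersection
    dice = 2 * intersection / (size_a + size_b)
    jaccard = intersection / union
    return dice, jaccard

prediction   = [1, 1, 1, 0, 0, 1, 0, 0]
ground_truth = [1, 1, 0, 0, 1, 1, 0, 0]
d, j = dice_jaccard(prediction, ground_truth)
print(round(d, 2), round(j, 2))  # 0.75 0.6
```

For a single pair of masks the two measures are monotonic transforms of each other (D = 2J/(1 + J)); averaged over many test images the relation holds only approximately, which is consistent with the reported values of about 0.8 and 0.6.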

Journal ArticleDOI
TL;DR: Simplified and standardized processes, improved communications, and system-wide management are among the proposed improvements, which reduced patient discharge time by 54% from 216 minutes.
Abstract: Short discharge time from hospitals increases both bed availability and patients' and families' satisfaction. In this study, the Six Sigma process improvement methodology was applied to reduce patients' discharge time in a cancer treatment hospital. Data on the duration of all activities, from the physician signing the discharge form to the patient leaving the treatment room, were collected through patient shadowing. These data were analyzed using detailed process maps and cause-and-effect diagrams. Fragmented and unstandardized processes and procedures and a lack of communication among the stakeholders were among the leading causes of long discharge times. Categorizing patients by their needs enabled better design of the discharge processes. Discrete event simulation was utilized as a decision support tool to test the effect of the improvements under different scenarios. Simplified and standardized processes, improved communications, and system-wide management are among the proposed improvements, which reduced patient discharge time by 54% from 216 minutes. Cultivating the necessary ownership through stakeholder analysis is an essential ingredient of sustainable improvement efforts.
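Discrete event simulation of a discharge pathway can be sketched compactly when the process is a sequence of single-server steps: a patient starts a step only when both the patient and the step's resource are free. The toy model below uses invented stage names, staffing, and exponential durations, not the hospital's actual process data:

```python
import random

def simulate_discharge(n_patients, stage_means, seed=1):
    """Tandem single-server stages served FIFO; returns each patient's
    total discharge time, with all discharges starting at t = 0."""
    rng = random.Random(seed)
    # done[s] = time the previous patient cleared stage s
    done = [0.0] * len(stage_means)
    totals = []
    for _ in range(n_patients):
        t = 0.0  # this patient's clock
        for s, mean in enumerate(stage_means):
            start = max(t, done[s])  # wait until the stage is free
            t = start + rng.expovariate(1 / mean)
            done[s] = t
        totals.append(t)
    return totals

# Invented stages: paperwork, pharmacy, billing, transport (mean minutes)
times = simulate_discharge(n_patients=50, stage_means=[30, 60, 20, 15])
print(round(sum(times) / len(times)))  # mean discharge time in minutes
```

Alternative scenarios, such as adding pharmacy staff or standardizing a step, are tested by editing the stage list and rerunning, which mirrors how a simulation serves as a decision support tool before changing the real process.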

Journal ArticleDOI
TL;DR: This paper presents a meta-modelling system that automates the very labor-intensive, and therefore time-consuming and expensive, process of manually cataloging medical records.
Abstract: Lithuanian University of Health Sciences, Kaunas, Lithuania European Federation for Medical Informatics-Health Information Management Europe ProRec, Berlin, Germany Department of Software Engineering, Kaunas University of Technology, Kaunas, Lithuania Department of Management and Entrepreneurship, Turku School of Economics, University of Turku, Turku, Finland Department of Computer Engineering and Mathematics, Smart Health Research Group, Universitat Rovira i Virgili, Tarragona, Catalonia, Spain

Journal ArticleDOI
TL;DR: A novel method of multiscale decision tree regression voting using SIFT-based patch features is proposed for automatic landmark detection in lateral cephalometric radiographs, and experimental results show that the performance of the proposed method is satisfactory for landmark detection and measurement analysis in lateral cephalograms.
Abstract: Cephalometric analysis is a standard tool for assessment and prediction of craniofacial growth, orthodontic diagnosis, and oral-maxillofacial treatment planning. The aim of this study is to develop a fully automatic system of cephalometric analysis, including cephalometric landmark detection and cephalometric measurement in lateral cephalograms for malformation classification and assessment of dental growth and soft tissue profile. First, a novel method of multiscale decision tree regression voting using SIFT-based patch features is proposed for automatic landmark detection in lateral cephalometric radiographs. Then, some clinical measurements are calculated by using the detected landmark positions. Finally, two databases are tested in this study: one is the benchmark database of 300 lateral cephalograms from 2015 ISBI Challenge, and the other is our own database of 165 lateral cephalograms. Experimental results show that the performance of our proposed method is satisfactory for landmark detection and measurement analysis in lateral cephalograms.
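After landmark detection, most cephalometric measurements reduce to angles formed by triplets of landmarks (for example, SNA is the angle at nasion between sella and A-point). A minimal sketch of that geometry, with invented landmark coordinates:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees formed at `vertex` by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Invented pixel coordinates for three landmarks
nasion, sella, a_point = (100.0, 50.0), (60.0, 60.0), (95.0, 120.0)
sna = angle_at(nasion, sella, a_point)
print(round(sna, 1))
```

With real detected landmarks, the same helper yields the clinical angles, which is why landmark detection accuracy directly bounds measurement accuracy.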

Journal ArticleDOI
TL;DR: This work compares three different nonautomatic segmentation algorithms in freehand three-dimensional ultrasound imaging in terms of accuracy, robustness, ease of use, level of human interaction required, and computation time.
Abstract: The thyroid is one of the largest endocrine glands in the human body, involved in several body mechanisms such as controlling protein synthesis, the body's sensitivity to other hormones, and the use of energy sources. Hence, it is of prime importance to track the shape and size of the thyroid over time in order to evaluate its state. Thyroid segmentation and volume computation are important tools for assessing the thyroid's state over time. Most of the proposed approaches are not automatic and require a long time to correctly segment the thyroid. In this work, we compare three different nonautomatic segmentation algorithms (i.e., active contours without edges, graph cut, and pixel-based classifier) in freehand three-dimensional ultrasound imaging in terms of accuracy, robustness, ease of use, level of human interaction required, and computation time. We found that these methods lack automation and machine intelligence and are not highly accurate. Hence, we implemented two machine learning approaches (i.e., random forest and convolutional neural network) to improve the accuracy of segmentation as well as provide automation. This comparative study discusses and analyses the advantages and disadvantages of the different algorithms. In the last step, the volume of the thyroid is computed using the segmentation results, and the performance of all the algorithms is analysed by comparing the segmentation results with the ground truth.
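The final volume computation is straightforward once a segmentation exists: count the voxels labelled thyroid and multiply by the physical volume of one voxel. A minimal sketch (the toy segmentation and voxel spacing are invented):

```python
def thyroid_volume_ml(segmentation, spacing_mm):
    """Volume in millilitres from a 3D binary segmentation.

    segmentation: nested lists [slice][row][col], 1 = thyroid voxel
    spacing_mm:   (dz, dy, dx) physical voxel size in millimetres
    """
    voxel_count = sum(v for plane in segmentation for row in plane for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxel_count * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy 2x2x2 volume containing 5 thyroid voxels, 1.0 x 0.5 x 0.5 mm voxels
seg = [[[1, 1], [1, 0]], [[1, 1], [0, 0]]]
volume = thyroid_volume_ml(seg, (1.0, 0.5, 0.5))
print(volume)  # 0.00125
```

Because the conversion is the same for every algorithm, differences in computed volume trace back entirely to differences in the segmentation masks being compared against the ground truth.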