
Showing papers in "Journal of Medical Imaging and Health Informatics in 2019"


Journal ArticleDOI
TL;DR: A high-performance multiple sclerosis classification model is developed by using transfer learning to adapt AlexNet to the authors' brain-image classification task; its performance is better than seven state-of-the-art multiple sclerosis classification approaches.
Abstract: Aim: We developed a high-performance multiple sclerosis classification model in this study. Method: The dataset was segmented into training, validation, and test sets. We used AlexNet as the base model and employed transfer learning to adapt AlexNet to classify multiple sclerosis brain images in our task. We tested different settings of transfer learning, i.e., how many layers were transferred and how many layers were replaced. The learning rate of the replaced layers was set to 10 times that of the transferred layers. We compared the results using five measures: sensitivity, specificity, precision, accuracy, and F1 score. Results: We found that replacing the FC_8 block of the original AlexNet procured the best performance: a sensitivity of 98.12%, a specificity of 98.22%, an accuracy of 98.17%, a precision of 98.21%, and an F1 score of 98.15%. Conclusions: Our performance is better than seven state-of-the-art multiple sclerosis classification approaches.
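The layer-wise learning-rate policy described above (replaced layers trained at 10 times the rate of transferred layers) can be sketched as follows; BASE_LR and the layer names are illustrative, not values from the paper:

```python
# Layer names and BASE_LR are illustrative; only the 10x ratio comes from
# the abstract. Replaced layers get a learning rate 10x that of the
# transferred layers.
BASE_LR = 1e-4

def make_param_groups(layers, replaced=("fc8",)):
    """Assign each layer a learning rate; replaced layers train 10x faster."""
    return {name: BASE_LR * (10 if name in replaced else 1) for name in layers}

alexnet_layers = ["conv1", "conv2", "conv3", "conv4", "conv5", "fc6", "fc7", "fc8"]
groups = make_param_groups(alexnet_layers)
print(groups["fc8"] > groups["fc7"])  # -> True: the replaced block trains faster
```

In a deep learning framework, these per-layer rates would become optimizer parameter groups.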

78 citations



Journal ArticleDOI
TL;DR: This research presents a novel and scalable approach to integrate nanofiltration and X-ray diffraction analysis for high-performance liquid chromatography of Na6(CO3)(SO4) levels.
Abstract: 1The State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China 2The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China 3School of Mechanical and Aerospace Engineering, Nanyang Technological University, 639798, Singapore 4School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China

39 citations




Journal ArticleDOI
TL;DR: Experimental results show that PLOA outperforms LOA and other well-known approaches on five benchmark cancer gene expression datasets, returning 99% classification accuracy for the Prostate, Lung, Leukemia, and Central Nervous System (CNS) datasets for the top 200 genes.
Abstract: In the field of bioinformatics research, a large volume of genetic data has been generated. The availability of higher-throughput devices at lower cost has contributed to this huge volume of data, and handling such data has become extremely challenging for selecting the relevant disease-causing genes. The development of microarray technology improves the chances of cancer diagnosis by enabling the expression levels of multiple genes to be measured in a single run. Selecting the relevant genes by using classifiers to investigate gene expression data is a complicated process, and proper identification of genes from gene expression datasets plays a vital role in improving classification accuracy. In this article, identification of highly relevant genes from gene expression data for cancer treatment is discussed in detail, using a modified meta-heuristic approach known as 'parallel lion optimization' (PLOA) to select genes from microarray data that can classify various cancer subtypes with greater accuracy. The experimental results show that PLOA outperforms LOA and other well-known approaches on five benchmark cancer gene expression datasets. It returns 99% classification accuracy for the Prostate, Lung, Leukemia, and Central Nervous System (CNS) datasets for the top 200 genes; for the Prostate and Lymphoma datasets, PLOA achieves 99.19% and 99.93%, respectively. Evaluated against other algorithms, the proposed algorithm achieves a higher level of accuracy in gene selection.
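The wrapper-style idea behind meta-heuristic gene selection can be sketched as below. The actual lion-optimization operators of PLOA are not reproduced here; this is a generic random-search stand-in with a toy fitness function instead of classifier accuracy on real expression data:

```python
import random

# Candidate gene subsets are scored by a fitness function and the best
# subset is kept. The "informative" set and the fitness are toys; a real
# run would score subsets by cross-validated classification accuracy.
random.seed(0)
N_GENES, SUBSET, ITERS = 50, 5, 200
informative = {3, 11, 27}  # hypothetical disease-relevant genes

def fitness(subset):
    """Stand-in for classification accuracy: count informative genes hit."""
    return len(set(subset) & informative)

best, best_fit = None, -1
for _ in range(ITERS):
    cand = random.sample(range(N_GENES), SUBSET)
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f

print(best_fit >= 1)  # the search recovers at least one informative gene
```

PLOA's contribution is running many such searches in parallel with lion-inspired update rules rather than blind random sampling.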

16 citations


Journal ArticleDOI
TL;DR: An improved deep CNN model combining U-Net and Dense-Net structures is proposed for retinal vessel segmentation; the CLAHE algorithm is used for image preprocessing, which reduces image noise and enhances tiny retinal blood vessel features.
Abstract: Retinal blood vessel features are crucial biomarkers for ophthalmologic and cardiovascular diseases, and efficient image segmentation technologies help doctors diagnose these related diseases. We propose an improved deep CNN model to segment retinal blood vessels. Our method includes three steps: data augmentation, image preprocessing, and model training. The data augmentation uses rotation and image mirroring so that the trained model generalizes better. The CLAHE algorithm is used for image preprocessing, which reduces image noise and enhances tiny retinal blood vessel features. Finally, we used a deep CNN model combining U-Net and Dense-Net structures to train on retinal blood vessel images. The proposed model was tested on the publicly available DRIVE dataset, achieving an average accuracy of 0.951, specificity of 0.973, sensitivity of 0.797, and an average AUC of 0.885. The results show its potential for clinical application.
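The preprocessing step uses CLAHE. As a hedged stand-in, the sketch below implements plain global histogram equalization; CLAHE adds tile-wise equalization with a clip limit on top of this idea, so this only shows the core contrast-stretching mechanism:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for a 2-D uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                 # first occupied intensity bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)  # intensity lookup table
    return lut[img]

# Low-contrast ramp occupying only intensities 64..191.
img = np.tile(np.arange(64, 192, dtype=np.uint8), (8, 1))
out = hist_equalize(img)
print(out.min(), out.max())  # -> 0 255 (contrast stretched to full range)
```

In the paper's pipeline, the tile-based CLAHE variant would be applied to the green channel of each fundus image before the network sees it.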

12 citations


Journal ArticleDOI
TL;DR: A novel automatic classification method for arrhythmia, validated under both intra-patient and inter-patient schemes, that achieved an average accuracy, specificity and sensitivity of 99.08%, 99.00% and 89.31%, respectively, using intra-patient beats, and 92.31%, 89.98% and 37.47%, respectively, using inter-patient beats.
Abstract: Cardiovascular diseases have become more and more prominent in recent years and have proven to be a major threat to people's health. Accurate detection of arrhythmia in patients has important implications for clinical treatment. The aim of this study was to propose a novel automatic classification method for arrhythmia in order to improve classification accuracy. The electrocardiogram (ECG) signal was preprocessed for denoising using a wavelet transform. Then, local and global characteristics of each beat were extracted and fused: RR interval features in accordance with clinical diagnostic criteria, morphology features based on wavelet packet decomposition, and statistical features including the kurtosis coefficient, skewness coefficient, and variance. Meanwhile, the dimensionality of the wavelet packet coefficients was reduced via principal component analysis (PCA). Finally, these features were used as the input of a random forest classifier to train the model, which was then compared with support vector machine (SVM) and back propagation (BP) neural networks. Based on 100,647 beats from the MIT-BIH database, the proposed method achieved an average accuracy, specificity and sensitivity of 99.08%, 99.00% and 89.31%, respectively, using intra-patient beats, and 92.31%, 89.98% and 37.47%, respectively, using inter-patient beats. Two classification schemes, the intra-patient and inter-patient schemes, were thus validated. Compared with the other methods referred to in this paper, the novel method yielded better results.
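Two of the hand-crafted feature families the pipeline fuses, RR-interval statistics and the kurtosis/skewness/variance statistics of a beat, can be sketched as follows (beat extraction, wavelet denoising, and the classifier are omitted; the R-peak times below are illustrative):

```python
import numpy as np

def stat_features(beat):
    """Variance, skewness and kurtosis of one beat segment."""
    x = np.asarray(beat, dtype=float)
    mu, sigma = x.mean(), x.std()
    var = sigma ** 2
    skew = np.mean((x - mu) ** 3) / sigma ** 3   # third standardized moment
    kurt = np.mean((x - mu) ** 4) / sigma ** 4   # fourth standardized moment
    return var, skew, kurt

def rr_features(r_peaks_s):
    """Mean and std of RR intervals (seconds) from R-peak times."""
    rr = np.diff(r_peaks_s)
    return rr.mean(), rr.std()

var, skew, kurt = stat_features(np.sin(np.linspace(0, 2 * np.pi, 500)))
mean_rr, std_rr = rr_features([0.0, 0.8, 1.6, 2.5, 3.3])
print(round(mean_rr, 3))  # -> 0.825
```

In the paper these values would be concatenated with PCA-reduced wavelet packet coefficients and fed to the random forest.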

11 citations


Journal ArticleDOI
TL;DR: PEM assesses the similarity between the user's profile and a query before it is posted to the WSE, helping the user avoid privacy exposure, and offers more privacy to the user even in the case of a machine-learning attack.
Abstract: The increasing use of web search engines (WSEs) for searching healthcare information has resulted in a growing number of users posting personal health information online. A recent survey demonstrates that over 80% of patients use WSEs to seek health information. However, a WSE stores these users' queries to analyze user behavior, result ranking, personalization, targeted advertisements, and other activities. Since health-related queries contain privacy-sensitive information that may infringe on users' privacy, privacy-preserving web search techniques such as anonymizing networks, profile obfuscation, and private information retrieval (PIR) protocols are used to ensure the user's privacy. In this paper, we propose the Privacy Exposure Measure (PEM), a technique that allows a user to control his or her privacy exposure while using PIR protocols. PEM assesses the similarity between the user's profile and a query before it is posted to the WSE and assists the user in avoiding privacy exposure. The experiments demonstrate a 37.2% difference between user profiles created through the PEM-powered PIR protocol and ordinary user profiles. Moreover, PEM offers more privacy to the user even in the case of a machine-learning attack.
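The core PEM check, measuring how similar a query is to the user's profile before it reaches the WSE, might look like the following bag-of-words cosine-similarity sketch; the exact profile representation used by the paper is an assumption here:

```python
import math
from collections import Counter

def cosine_sim(a_words, b_words):
    """Cosine similarity between two bags of words (0 = unrelated, 1 = identical)."""
    a, b = Counter(a_words), Counter(b_words)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical interest profile and candidate query.
profile = "diabetes insulin blood sugar diet".split()
query = "insulin dosage diabetes".split()
risk = cosine_sim(profile, query)
print(risk > 0.5)  # -> True: a similar health query means high exposure risk
```

A query scoring above some threshold against the profile would be flagged so the user can rephrase it or route it differently.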

10 citations



Journal ArticleDOI
TL;DR: The results show that OSLo performs better than the benchmark privacy-preserving protocol in terms of privacy and delay, and that a user's privacy depends on the size of the group.
Abstract: Users around the world send queries to the Web Search Engine (WSE) to retrieve data from the Internet, and they usually take primary assistance regarding medical information from the WSE via search queries. Search queries relating to diseases and treatments are considered to be among the most personal facts about a user, and they often contain identifiable information that can be linked back to the originator, compromising the user's privacy. In this work, we propose a distributed privacy-preserving protocol (OSLo) that eliminates limitations in existing distributed privacy-preserving protocols, along with a framework that evaluates the privacy of a user. The OSLo framework assesses local privacy relative to the group of users involved in forwarding a query to the WSE, and profile privacy against profiling by the WSE. The privacy analysis shows that the local privacy of a user depends directly on the size of the group and inversely on the number of compromised users. We have performed experiments to evaluate the profile privacy of a user using the Profile Exposure Level privacy metric. OSLo was simulated with a subset of 1,000 users of the AOL query log. The results show that OSLo performs better than the benchmark privacy-preserving protocol in terms of privacy and delay. Additionally, the results show that the privacy of a user depends on the size of the group.
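The reported relationship, that local privacy rises with group size and falls with the number of compromised users, can be illustrated with a toy metric: the chance that a uniformly chosen group member handling the query is not compromised. This simple formula is illustrative only, not the paper's exact definition:

```python
def local_privacy(group_size, compromised):
    """Toy local-privacy metric: probability a random forwarder is honest."""
    return 1 - compromised / group_size

# Larger group with the same number of compromised users -> more privacy.
print(local_privacy(1000, 50))                            # -> 0.95
print(local_privacy(1000, 50) > local_privacy(100, 50))   # -> True
```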


Journal ArticleDOI
TL;DR: The Akima Spline Interpolation based Ensemble Empirical Mode Kalman Filter Decomposition (ASI-EEMKFD) model proposed in the paper focuses on detecting seizures automatically through a stable algorithm written in Python using the PyEEG package.
Abstract: Epilepsy is a chronic neurological disorder. Epileptic seizures are generated by transient signal disturbances in the cerebrum, and they can be detected by analyzing electroencephalogram (EEG) signals. The Akima Spline Interpolation based Ensemble Empirical Mode Kalman Filter Decomposition (ASI-EEMKFD) model proposed in this paper focuses on detecting seizures automatically through a stable algorithm written in Python using the PyEEG package. The signal detection process is done in three phases. First, the EEG signals are acquired from datasets. Then the signal is decomposed using Akima spline interpolation to find the intrinsic mode functions, and further decomposed following the steps of Ensemble Empirical Mode Decomposition (EEMD). During the decomposition, a Kalman filter is used to remove white Gaussian noise. Finally, the decomposed signals are applied to a Long Short-Term Memory (LSTM) deep learning classifier, which classifies ictal, pre-ictal, and healthy signals. Our proposed method produces better results than existing EEMD methods, with an accuracy of 98.2%, a sensitivity of 94.96%, and a specificity of 93.72%.
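The Kalman-filtering stage that suppresses white Gaussian noise during decomposition can be sketched as a minimal scalar (random-walk) Kalman filter; the process and measurement variances q and r below are illustrative choices, not values from the paper:

```python
import random

def kalman_1d(z, q=1e-4, r=0.25):
    """Scalar random-walk Kalman filter over a list of measurements z."""
    x, p = z[0], 1.0            # state estimate and its variance
    out = []
    for meas in z:
        p += q                  # predict: process noise grows the variance
        k = p / (p + r)         # Kalman gain
        x += k * (meas - x)     # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out

random.seed(1)
clean = [1.0] * 200
noisy = [v + random.gauss(0, 0.5) for v in clean]   # white Gaussian noise
smoothed = kalman_1d(noisy)
err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)
err_smooth = sum((a - b) ** 2 for a, b in zip(smoothed, clean)) / len(clean)
print(err_smooth < err_noisy)  # filtering reduces mean squared error
```

In the full pipeline this filtering would be applied to the intermediate signals produced during the EEMD steps rather than to a constant toy signal.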





Journal ArticleDOI
TL;DR: This research presents a framework for the collection, modeling, and visualization of health-related patterns, with a comparative analysis carried out among a baseline method and four classification algorithms: Naive Bayes, Support Vector Machine, Logistic Regression, and Decision Tree.
Abstract: The trend of news transmission is rapidly shifting from electronic media to social media. Currently, news channels in general, and health news channels specifically, send health-related news over social media sites. This news is beneficial for patients, medical professionals, and the general public. A lot of health-related data is available on social media that may be used to extract significant information and derive predictions to assist physicians, patients, and healthcare organizations in decision making. However, little research has examined health news data using machine learning approaches; thus, in this paper, we propose a framework for the collection, modeling, and visualization of health-related patterns. For the analysis, the tweets of 13 news channels were collected from Twitter. The dataset holds approximately 28k tweets under 280 hashtags. Furthermore, a comprehensive set of experiments was performed to extract patterns from the data. A comparative analysis was carried out among a baseline method and four classification algorithms: Naive Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), and Decision Tree (J48). For the evaluation of the results, the standard measures of accuracy, precision, recall, and F-measure were used. The results of the study are encouraging and better than other studies of this kind.
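One of the compared classifiers, multinomial Naive Bayes, can be sketched end to end on toy tweets. The labels and vocabulary below are illustrative; the paper's SVM, LR, and J48 baselines are omitted:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (word_list, label). Returns word counts, priors, vocab."""
    counts, priors, vocab = {}, Counter(), set()
    for words, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return counts, priors, sum(priors.values()), vocab

def predict_nb(model, words):
    counts, priors, n, vocab = model
    best, best_lp = None, -float("inf")
    for label, prior in priors.items():
        total = sum(counts[label].values())
        lp = math.log(prior / n)
        for w in words:  # Laplace smoothing handles unseen words
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("flu vaccine clinic".split(), "health"),
        ("new flu outbreak reported".split(), "health"),
        ("election results tonight".split(), "politics"),
        ("senate vote on budget".split(), "politics")]
model = train_nb(docs)
print(predict_nb(model, "flu shot".split()))  # -> health
```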

Journal ArticleDOI
TL;DR: An efficient census-transform-based stereo matching algorithm for medical imaging; to simplify the calculation process and improve efficiency, moving-window and memory-organization optimization techniques are used.
Abstract: Compared to most conventional efficient stereo matching algorithms based on NCC (Normalized Cross-Correlation) or SAD (Sum of Absolute Differences), stereo matching based on the census transform is robust to radiometric distortion. Thus, in this paper we propose an efficient census-based stereo matching implementation for medical imaging. Firstly, census-based stereo matching is investigated and its implementation process is analyzed in detail. Secondly, in order to simplify the calculation process and improve efficiency, moving-window and memory-organization optimization techniques are used. The program runs on standard PC hardware utilizing various SSE2 instructions. Finally, stereo matching of four standard image pairs from the Middlebury image datasets and a pair of cervical images obtained from a clinical colposcope is implemented in an efficient way. The experimental results on simulated and real medical images demonstrate the method's computational efficiency.
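The census transform at the heart of the algorithm encodes each pixel as a bit string recording which neighbours in a window are darker than the centre; matching then compares bit strings by Hamming distance, which is why it tolerates radiometric (brightness) distortion. A minimal 3x3 sketch (NumPy instead of the paper's SSE2-optimized C):

```python
import numpy as np

def census_3x3(img):
    """3x3 census transform: one bit per neighbour, set if neighbour < centre.

    np.roll wraps at the borders, which is acceptable for this toy example;
    a production version would skip or pad the border pixels.
    """
    out = np.zeros(img.shape, dtype=np.uint16)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint16)
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])
bright = img + 25  # uniform radiometric offset
# Census codes are identical: only the relative order of intensities matters.
print((census_3x3(img) == census_3x3(bright)).all())  # -> True
```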


Journal ArticleDOI
TL;DR: A DTI denoising algorithm based on a Riemannian geometric framework and sparse Bayesian learning is proposed to remove Rician noise while preserving the diffusion tensor geometry of DTI.
Abstract: Diffusion tensor imaging (DTI) is a special type of magnetic resonance imaging (MRI) and the only noninvasive method that can effectively observe and trace the white matter fiber bundles of the brain. In the imaging process, the signal-to-noise ratio (SNR) of the MR image is low due to the influence of Rician noise, which makes processing difficult for existing algorithms and limits the development of DTI in clinical applications. In order to remove the Rician noise and preserve the diffusion tensor geometry of DTI, we propose a DTI denoising algorithm based on a Riemannian geometric framework and sparse Bayesian learning. Firstly, the DTI tensor is mapped to a Riemannian manifold to preserve the structural properties of the tensor. Then, a sparse Bayesian learning method is used to reconstruct the noise-free DTI. The experimental results on synthetic and real DTI data show that the proposed algorithm effectively removes the Rician noise in DTI while preserving its nonlinear structure. Compared with existing denoising algorithms, the proposed algorithm has better denoising performance.
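Mapping diffusion tensors (symmetric positive-definite 3x3 matrices) to a Riemannian manifold is commonly realized with the log-Euclidean approach: take matrix logarithms, operate in that space where Euclidean averaging is safe, and map back with the matrix exponential. Whether the paper uses exactly this metric is an assumption of the sketch below:

```python
import numpy as np

def logm_spd(t):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, v = np.linalg.eigh(t)                # real eigendecomposition for SPD
    return v @ np.diag(np.log(w)) @ v.T

def expm_sym(t):
    """Matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(t)
    return v @ np.diag(np.exp(w)) @ v.T

a = np.diag([1.0, 2.0, 4.0])                # toy diffusion tensors
c = np.diag([4.0, 2.0, 1.0])
roundtrip = expm_sym(logm_spd(a))           # exp(log(A)) recovers A
mean = expm_sym((logm_spd(a) + logm_spd(c)) / 2)  # log-Euclidean mean, still SPD
print(np.allclose(roundtrip, a))            # -> True
```

Averaging in log-space is what keeps denoised tensors positive-definite; a naive Euclidean average of noisy tensors can drift toward degenerate (non-physical) diffusion profiles.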


Journal ArticleDOI
TL;DR: The results showed that the Support Vector Machine demonstrated the best classification accuracy when a subset of features was used; these findings may provide useful reference values for the development of an automatic cervical cancer screening tool.
Abstract: Cervical cancer represents a major cause of death for women. Automatic classification of cervical images from the acetic acid test could serve as a promising screening tool for cervical cancer. Despite an increasing volume of studies on automatic classification of cervical images, reported methods vary markedly in the features and classifiers used, and therefore in performance, and the classification performance of different classifier configurations has not been well characterized. The objective of this study was to evaluate several frequently used features and classifiers for acetic-acid cervical image based cervical intraepithelial neoplasia (CIN) classification. Seven typically used color- or texture-based features and four frequently used classifiers (Support Vector Machine, Random Forest, Back-Propagation Neural Network, and K-Nearest Neighbors) were included in the comparison, based on a large balanced sample of 175 CIN-negative and 175 CIN-positive patients. The results showed that the Support Vector Machine demonstrated the best classification accuracy when a subset of features was used. The findings of this study may provide useful reference values for the development of an automatic cervical cancer screening tool.
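One of the four compared classifiers, K-Nearest Neighbors, can be sketched on toy color-feature vectors (e.g., the mean RGB of an acetic-acid image region); the feature values and labels below are illustrative, not study data:

```python
import numpy as np

def knn_predict(x, X, y, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X - x, axis=1)       # Euclidean distance to each sample
    nearest = y[np.argsort(d)[:k]]          # labels of the k closest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]          # majority vote

# Mean-RGB feature vectors: brighter (aceto-white) regions labeled positive.
X = np.array([[200, 180, 170],
              [210, 190, 180],
              [120,  80,  90],
              [110,  70,  85]], dtype=float)
y = np.array([1, 1, 0, 0])                  # 1 = CIN positive, 0 = CIN negative
print(knn_predict(np.array([205.0, 185.0, 175.0]), X, y))  # -> 1
```

The study's comparison swaps this classifier for SVM, Random Forest, or a BP network over the same feature sets; k and the distance metric are the tunable configuration here.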