
Showing papers in "Medical & Biological Engineering & Computing in 2020"


Journal ArticleDOI
TL;DR: A novel automated convolutional neural network architecture for a multiclass classification system based on spectral-domain optical coherence tomography (SD-OCT) has been proposed and is a potentially impactful tool for the diagnosis of retinal diseases using SD-OCT images.
Abstract: Since the introduction of optical coherence tomography (OCT) for 2D eye imaging, it has become one of the most important and widely used imaging modalities for the noninvasive assessment of retinal eye diseases. Age-related macular degeneration (AMD) and diabetic macular edema, the leading causes of blindness, are diagnosed using OCT. Recently, with the development of machine learning and deep learning techniques, the classification of retinal diseases from OCT images has become an active research challenge. In this paper, a novel automated convolutional neural network (CNN) architecture for a multiclass classification system based on spectral-domain optical coherence tomography (SD-OCT) has been proposed. The system classifies five classes: four retinal conditions (age-related macular degeneration (AMD), choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen) in addition to normal cases. The proposed CNN architecture with a softmax classifier correctly identified 100% of cases with AMD, 98.86% of cases with CNV, 99.17% of cases with DME, 98.97% of cases with drusen, and 99.15% of normal cases, with an overall accuracy of 95.30%. This architecture is a potentially impactful tool for the diagnosis of retinal diseases using SD-OCT images.
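The final decision stage of such a multiclass CNN can be sketched as a softmax over per-class logits; a minimal sketch, with the class list taken from the paper and the logit values purely illustrative:

```python
import math

CLASSES = ["AMD", "CNV", "DME", "drusen", "normal"]  # the five classes in the paper

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    # Return the most probable class label together with the full distribution
    probs = softmax(logits)
    return CLASSES[probs.index(max(probs))], probs
```

The CNN layers that produce the logits are omitted; this only illustrates the softmax classifier named in the abstract.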

80 citations


Journal ArticleDOI
TL;DR: A DICOM image encryption scheme based on chaotic attractors in the frequency domain via integer wavelet transform (IWT), fused with a deoxyribonucleic acid (DNA) sequence in the spatial domain, is recommended and shown to be robust against brute force attacks.
Abstract: In today's technological era, the booming demand for e-healthcare has heightened attention to the security of data against cyber attacks. As digital medical images are transferred over public networks, an adequate level of protection is required. One of the prominent techniques for securing medical images is encryption. This paper recommends a DICOM image encryption based on chaotic attractors in the frequency domain via integer wavelet transform (IWT), fused with a deoxyribonucleic acid (DNA) sequence in the spatial domain. The proposed algorithm uses a chaotic 3D Lorenz attractor and a logistic map to generate pseudo-random keys for encryption. The algorithm involves successive stages, i.e., permutation, substitution, encoding, complementation, and decoding. To endorse the resistance of the proposed algorithm, various analyses were examined for 256 × 256 DICOM images, achieving an average entropy of 7.99, a large keyspace of 10^238, and near-zero correlation. The overall results confirm that the proposed algorithm is robust against brute force attacks.
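The logistic-map part of the key generation can be sketched as follows; the seed and control parameter are illustrative, and the paper's full scheme (3D Lorenz attractor, IWT, DNA coding) is omitted here:

```python
def logistic_keystream(x0, r, n):
    # Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) and
    # quantize each state to a byte to form a pseudo-random keystream
    keys, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        keys.append(int(x * 256) % 256)
    return keys

def xor_cipher(data, keystream):
    # XOR is its own inverse, so the same call encrypts and decrypts
    return bytes(b ^ k for b, k in zip(data, keystream))
```

For r close to 4 the map behaves chaotically, so tiny changes in x0 yield a completely different keystream, which is the key-sensitivity property such schemes rely on.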

78 citations


Journal ArticleDOI
TL;DR: This review intends to bring out the process of image fusion, its utilization in the medical domain, and its merits and demerits, and discusses the involvement of various medical imaging modalities such as magnetic resonance imaging, positron emission tomography, and computed tomography.
Abstract: Image fusion based on multimodal medical images renders a considerable enhancement in the quality of fused images. An effective image fusion technique produces output images that preserve all the viable and prominent information gathered from the source images without introducing flaws or unnecessary distortions. This review paper intends to bring out the process of image fusion, its utilization in the medical domain, and its merits and demerits, and reviews the perspective of multimodal medical image fusion. It also discusses the involvement of various medical imaging modalities such as magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). The usefulness of such modalities is presented, suggesting plausible hybrid modality combinations which could greatly enhance image fusion. This review also discusses innovative developments in medical image fusion techniques for the achievement of incisively desired, quality images, focusing on fusion with the wavelet transform and the use of independent component analysis (ICA) and principal component analysis (PCA) techniques for denoising and data-dimension reduction. Additionally, the future prospects of an ideal technique for medical image fusion through the utilization of various medical modalities are also discussed in this review paper. Graphical abstract.
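One common variant of the PCA-based fusion the review discusses derives pixel weights from the principal eigenvector of the two source images' covariance; a minimal sketch for two co-registered images:

```python
import numpy as np

def pca_fuse(img_a, img_b):
    # Treat the two flattened source images as two variables and
    # compute their 2x2 covariance matrix
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    pc = np.abs(vecs[:, np.argmax(vals)])  # principal eigenvector
    w = pc / pc.sum()                      # normalized fusion weights
    # Weighted average of the sources; weights favor the image
    # carrying more variance (salient detail)
    return w[0] * img_a + w[1] * img_b
```

Because the weights are non-negative and sum to one, every fused pixel lies between the corresponding source pixels.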

71 citations


Journal ArticleDOI
TL;DR: Inertial sensor data (linear acceleration and angular rate) was simulated from a database of optical motion tracking data and used as input for a feedforward and a long short-term memory neural network to predict the joint angles and moments of the lower limbs during gait.
Abstract: In recent years, gait analysis outside the laboratory has attracted more and more attention in clinical applications as well as in the life sciences. Wearable sensors such as inertial sensors show high potential in these applications. Unfortunately, they can only measure kinematic motion patterns indirectly, and the outcome is currently jeopardized by measurement discrepancies compared with the gold standard of optical motion tracking. The aim of this study was to overcome the limitations of measurement discrepancies and missing information on kinetic motion parameters using a machine learning application based on artificial neural networks. For this purpose, inertial sensor data (linear acceleration and angular rate) was simulated from a database of optical motion tracking data and used as input for a feedforward and a long short-term memory neural network to predict the joint angles and moments of the lower limbs during gait. Both networks achieved mean correlation coefficients higher than 0.80 in the minor motion planes, and correlation coefficients higher than 0.98 in the sagittal plane. These results encourage further applications of artificial intelligence to support gait analysis.
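The reported correlation coefficients can be computed per motion plane with the standard Pearson formula between predicted and reference joint-angle curves; a minimal sketch:

```python
import math

def pearson_r(pred, ref):
    # Pearson correlation between a predicted and a reference time series
    n = len(pred)
    mp, mr = sum(pred) / n, sum(ref) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref))
    return cov / (sp * sr)
```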

68 citations


Journal ArticleDOI
TL;DR: A new blood cell image classification framework based on a deep convolutional generative adversarial network (DC-GAN) and a residual neural network (ResNet), with a new loss function that improves the discriminative power of the deeply learned features.
Abstract: In medicine, white blood cells (WBCs) play an important role in the human immune system. The different types of WBC abnormalities are related to different diseases, so the total number and classification of WBCs are critical for clinical diagnosis and therapy. However, the traditional method of white blood cell classification is to segment the cells, extract features, and then classify them. Such a method depends on good segmentation, and the accuracy is not high. Moreover, insufficient data or unbalanced samples can cause low classification accuracy in deep learning models for medical diagnosis. To solve these problems, this paper proposes a new blood cell image classification framework based on a deep convolutional generative adversarial network (DC-GAN) and a residual neural network (ResNet). In particular, we introduce a new loss function which improves the discriminative power of the deeply learned features. The experiments show that our model performs well on the classification of WBC images, and the accuracy reaches 91.7%. Graphical Abstract Overview of the proposed method: the DC-GAN generates new samples that are used as supplementary input to a ResNet; transfer learning is used to initialize the parameters of the network; and the DC-GAN output and these parameters are applied to the final classification network. In particular, we introduce a modified loss function for classification to increase inter-class variations and decrease intra-class differences.
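A loss that decreases intra-class differences, as described above, resembles the well-known center loss; a sketch under that assumption (the paper's exact formulation may differ):

```python
import numpy as np

def center_loss(features, labels, centers):
    # Half the mean squared distance of each feature vector to its
    # class center; minimizing this term shrinks intra-class variation
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
```

In training, this term would be added to the usual cross-entropy loss, and the class centers updated alongside the network weights.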

65 citations


Journal ArticleDOI
TL;DR: Results show that the proposed methodology, a deep ensemble extracting robust features with final classification by neural networks, is a promising and robust CADx system for breast cancer classification.
Abstract: Breast cancer has the second highest death rate among women worldwide. Early-stage prevention remains complex because the underlying causes are unknown. However, typical signatures such as masses and micro-calcifications found when investigating mammograms can help diagnose women better. Manual diagnosis is a demanding task that radiologists carry out frequently. For their assistance, many computer-aided diagnosis (CADx) approaches have been developed. To improve upon the state of the art, we propose a deep ensemble transfer learning and neural network classifier for automatic feature extraction and classification. In computer-assisted mammography, deep learning-based architectures are generally not trained on mammogram images directly. Instead, the images are pre-processed beforehand and then given as input to the proposed ensemble model. The robust features extracted from the ensemble model are optimized into a feature vector, which is further classified using a neural network (nntraintool). The network was trained and tested to separate benign from malignant tumors, achieving an accuracy of 0.88 with an area under the curve (AUC) of 0.88. The attained results show that the proposed methodology is a promising and robust CADx system for breast cancer classification. Graphical Abstract Flow diagram of the proposed approach. The figure depicts the deep ensemble extracting the robust features, with the final classification using neural networks.
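The ensemble feature-extraction step amounts to concatenating the feature vectors produced by several pretrained backbones into one vector for the classifier; a sketch with hypothetical stand-in backbones (the paper does not name its specific networks in the abstract):

```python
import numpy as np

def ensemble_features(extractors, image):
    # Concatenate the feature vectors produced by each backbone into
    # one long feature vector for the downstream classifier
    return np.concatenate([extract(image) for extract in extractors])

# Hypothetical stand-ins for pretrained deep backbones, returning
# fixed-length feature vectors regardless of input
def backbone_a(image):
    return np.array([1.0, 2.0, 3.0])

def backbone_b(image):
    return np.array([4.0, 5.0, 6.0, 7.0])
```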

52 citations


Journal ArticleDOI
TL;DR: This work will contribute to the development of more accurate surface EMG-based motor decoding systems for the control of prosthetic hands by determining new configurations of features and classifiers that improve the accuracy and response time of prosthetic control.
Abstract: Myoelectric pattern recognition (MPR) to decode limb movements is an important advancement regarding the control of powered prostheses. However, this technology is not yet in wide clinical use. Improvements in MPR could potentially increase the functionality of powered prostheses. To this purpose, offline accuracy and processing time were measured over 44 features using six classifiers, with the aim of determining new configurations of features and classifiers to improve the accuracy and response time of prosthetic control. An efficient feature set (FS: waveform length, correlation coefficient, Hjorth parameters) was found to improve the motion recognition accuracy. Using the proposed FS significantly increased the performance of linear discriminant analysis, K-nearest neighbor, maximum likelihood estimation (MLE), and support vector machine by 5.5%, 5.7%, 6.3%, and 6.2%, respectively, when compared with the Hudgins set. Using the FS with MLE provided the largest improvement in offline accuracy over the Hudgins feature set, with minimal effect on the processing time. Among the 44 features tested, logarithmic root mean square and normalized logarithmic energy yielded the highest recognition rates (above 95%). We anticipate that this work will contribute to the development of more accurate surface EMG-based motor decoding systems for the control of prosthetic hands.
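Two of the features named above, waveform length and logarithmic root mean square, have simple closed forms over an EMG window:

```python
import math

def waveform_length(x):
    # Cumulative length of the waveform: sum of absolute differences
    # between consecutive samples
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def log_rms(x):
    # Logarithm of the root-mean-square amplitude of the window
    return math.log(math.sqrt(sum(v * v for v in x) / len(x)))
```

In an MPR pipeline these would be computed per channel over sliding windows and stacked into the classifier's feature vector.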

47 citations


Journal ArticleDOI
TL;DR: A robust and accurate technique for the automatic detection of mitoses from histological breast cancer slides using the multi-task deep learning framework for object detection and instance segmentation Mask RCNN, which outperforms all state-of-the-art mitosis detection approaches on the 2014 ICPR dataset.
Abstract: Counting the mitotic cells in histopathological cancerous tissue areas is the most relevant indicator of tumor grade in aggressive breast cancer diagnosis. In this paper, we propose a robust and accurate technique for the automatic detection of mitoses from histological breast cancer slides using the multi-task deep learning framework for object detection and instance segmentation Mask RCNN. Our mitosis detection and instance segmentation framework is deployed for two main tasks: it is used as a detection network to perform mitosis localization and classification in the fully annotated mitosis datasets (i.e., the pixel-level annotated datasets), and it is used as a segmentation network to estimate the mitosis mask labels for the weakly annotated mitosis datasets (i.e., the datasets with centroid-pixel labels only). We evaluate our approach on the fully annotated 2012 ICPR grand challenge dataset and the weakly annotated 2014 ICPR MITOS-ATYPIA challenge dataset. Our evaluation experiments show that we can obtain the highest F-score of 0.863 on the 2012 ICPR dataset by applying the mitosis detection and instance segmentation model trained on the pixel-level labels provided by this dataset. For the weakly annotated 2014 ICPR dataset, we first employ the mitosis detection and instance segmentation model trained on the fully annotated 2012 ICPR dataset to segment the centroid-pixel annotated mitosis ground truths, and produce the mitosis mask and bounding box labels. These estimated labels are then used to train another mitosis detection and instance segmentation model for mitosis detection on the 2014 ICPR dataset. By adopting this two-stage framework, our method outperforms all state-of-the-art mitosis detection approaches on the 2014 ICPR dataset by achieving an F-score of 0.475. 
Moreover, we show that the proposed framework can also perform unsupervised mitosis detection through the estimation of pseudo labels for an unlabeled dataset and it can achieve promising detection results. Code has been made available at: https://github.com/MeriemSebai/MaskMitosis. Graphical Abstract Overview of MaskMitosis framework.
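In the two-stage pipeline above, estimated mitosis masks are converted into bounding-box labels for detection training; a minimal sketch of that mask-to-box step on a binary mask:

```python
def mask_to_bbox(mask):
    # Derive a bounding-box label (x_min, y_min, x_max, y_max) from a
    # binary mask given as a list of rows; assumes at least one
    # foreground pixel is present
    ys = [i for i, row in enumerate(mask) if any(row)]
    xs = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return (min(xs), min(ys), max(xs), max(ys))
```

The actual framework uses Mask RCNN to produce the masks; only this label-conversion detail is sketched here.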

46 citations


Journal ArticleDOI
TL;DR: A new deep convolutional neural network (CNN) architecture is designed to achieve the classification task of ILD patterns and a novel two-stage transfer learning (TSTL) method is proposed to deal with the problem of the lack of training data.
Abstract: Interstitial lung disease (ILD) refers to a group of various abnormal inflammations of lung tissues, and early diagnosis of these disease patterns is crucial for treatment. Yet it is difficult to make an accurate diagnosis due to the similarity among the clinical manifestations of these diseases. In order to assist radiologists, computer-aided diagnosis systems have been developed. In addition, deep convolutional neural networks (CNNs) have shown great potential for medical image analysis in recent years. In this paper, we design a new deep convolutional neural network (CNN) architecture to achieve the classification task of ILD patterns. Furthermore, we also propose a novel two-stage transfer learning (TSTL) method to deal with the problem of the lack of training data, which transfers the knowledge learned from sufficient textural source data and auxiliary unlabeled lung CT data to the target domain. We adopt an unsupervised manner to learn from the unlabeled data, by which an objective function composed of the prediction confidence and mutual information is optimized. The experimental results show that our proposed CNN architecture achieves desirable performance and outperforms most of the state-of-the-art ones. The comparative analysis demonstrates the promising feasibility and advantages of the proposed two-stage transfer learning strategy, as well as the potential of knowledge learned from lung CT data. Graphical Abstract The framework of the proposed two-stage transfer learning method.
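The prediction-confidence part of the unsupervised objective can be sketched as the mean entropy of the network's predicted distributions on unlabeled patches (low entropy = confident predictions); the mutual-information term is omitted here:

```python
import math

def entropy(p):
    # Shannon entropy of one predicted class distribution (natural log)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def confidence_objective(batch_probs):
    # Mean prediction entropy over a batch of unlabeled samples;
    # minimizing it pushes the network toward confident predictions
    return sum(entropy(p) for p in batch_probs) / len(batch_probs)
```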

40 citations


Journal ArticleDOI
TL;DR: The vascular difficulty level is set as an objective index, combined with operating characteristics extracted from the operations performed by surgeons, to evaluate surgical operation skills at the aortic arch using machine learning; the accuracy of the assessment improves from 86.67% to 96.67%.
Abstract: An accurate assessment of surgical operation skills is essential for improving the vascular intervention surgical outcome and the performance of endovascular surgery robots. In existing studies, subjective and objective assessments of surgical operation skills use a variety of indicators, such as operation speed and operation smoothness. However, the vascular conditions of particular patients have not been considered in the assessment, leading to deviations in the evaluation. Therefore, in this paper, an operation skills assessment method including a vascular difficulty level index for catheter insertion at the aortic arch in endovascular surgery is proposed. First, a model describing the difficulty of the vascular anatomical structure is established from characteristics of different aortic arch branches based on machine learning. Afterwards, the vascular difficulty level is set as an objective index, combined with operating characteristics extracted from the operations performed by surgeons, to evaluate the surgical operation skills at the aortic arch using machine learning. The accuracy of the assessment improves from 86.67% to 96.67% after inclusion of the vascular difficulty as an evaluation indicator, evaluating skills more objectively and accurately. The method described in this paper can be adopted to train novice surgeons in endovascular surgery and for studies of vascular interventional surgery robots. Graphical abstract Operation skill assessment with vascular difficulty for vascular interventional surgery.

40 citations


Journal ArticleDOI
Yue Zhang1, Shuai Yu1
TL;DR: A novel single-lead noninvasive fetal electrocardiogram extraction method based on the technique of clustering and PCA which is feasible and reliable to detect fetal heart rate and extract FECG.
Abstract: Early detection of potential hazards in the fetal physiological state during pregnancy and childbirth is very important. Noninvasive fetal electrocardiogram (FECG) can be extracted from the maternal abdominal signal. However, due to the interference of the maternal electrocardiogram and other noises, the task of extraction is challenging. This paper introduces a novel single-lead noninvasive fetal electrocardiogram extraction method based on the techniques of clustering and PCA. The method is divided into four steps: (1) pre-processing; (2) fetal QRS complex and maternal QRS complex detection based on a k-means clustering algorithm with the max-min pair feature; (3) an FQRS correction step to improve the performance of step two; (4) template subtraction based on PCA to extract the FECG waveform. To verify the performance of the proposed algorithm, two clinical open-access databases are used to check the performance of FQRS detection. As a result, the proposed method shows an average PPV of 95.35%, Se of 96.23%, and F1-measure of 95.78%. Furthermore, a robustness test carried out on an artificial database proves that the algorithm has a degree of robustness in various noise environments. Therefore, this method is feasible and reliable for detecting fetal heart rate and extracting FECG.
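The clustering step separates large maternal QRS complexes from smaller fetal ones using a 1-D amplitude feature; a sketch of a two-cluster k-means on such a feature, not the authors' implementation:

```python
def two_means_1d(values, iters=20):
    # Two-cluster k-means on a 1-D feature (e.g., QRS max-min amplitude):
    # initialize centers at the extremes, then alternate assignment
    # and center updates
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1) if g1 else c1
        c2 = sum(g2) / len(g2) if g2 else c2
    return c1, c2  # (smaller-amplitude center, larger-amplitude center)
```

Candidate beats near the larger center would be labeled maternal and the rest fetal; the paper's FQRS correction step then cleans up that assignment.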

Journal ArticleDOI
TL;DR: Surgical task performance evaluation in an endovascular evaluator (EVE) is conducted, and the results indicate that the proposed detection method breaks through the axial measuring range limitation of the previous marker-based detection method.
Abstract: Master-slave endovascular interventional surgery (EIS) robots have brought revolutionary advantages to traditional EIS, such as avoiding X-ray radiation to the surgeon and improving surgical precision and safety. However, the master controllers of most current EIS robots lead to poor human-machine interaction because of the difference in nature between the rigid operating handle and the flexible medical catheter used in EIS. In this paper, a noncontact detection method is proposed, and a novel master controller is developed to realize real-time detection of the surgeon's operation without interfering with the surgeon. A medical catheter is used as the operating handle. Detection is enabled by using the FAST corner detection algorithm and an optical flow algorithm to track the corner points of the continuous markers on a designed sensing pipe. A mathematical model is established to calculate the axial and rotational motion of the sensing pipe according to the moving distance of the corner points in image coordinates. A master-slave EIS robot system is constructed by integrating the proposed master controller and a developed slave robot. Surgical task performance evaluation in an endovascular evaluator (EVE) is conducted, and the results indicate that the proposed detection method breaks through the axial measuring-range limitation of the previous marker-based detection method. In addition, the rotational detection error is reduced by 92.5% compared with the previous laser-based detection method. The results also demonstrate the capability and efficiency of the proposed master controller in driving the slave robot for surgical task implementation. Graphical abstract A novel master controller is developed to realize real-time noncontact detection of the surgeon's operation without interfering with the surgeon. The master controller is used to remotely control the slave robot to implement certain surgical tasks.
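The mathematical model mapping corner displacement to pipe motion can be sketched as a simple calibration: horizontal marker motion gives axial translation, vertical motion (around the pipe circumference) gives rotation. The constants below are hypothetical, not from the paper:

```python
# Hypothetical calibration constants (illustrative only)
MM_PER_PIXEL = 0.05     # image scale of the camera viewing the markers
PIPE_RADIUS_MM = 2.0    # radius of the sensing pipe

def pipe_motion(dx_px, dy_px):
    # Map corner-point displacement in image coordinates to pipe motion:
    # axial translation in mm and rotation in radians (arc length / radius)
    axial_mm = dx_px * MM_PER_PIXEL
    rotation_rad = (dy_px * MM_PER_PIXEL) / PIPE_RADIUS_MM
    return axial_mm, rotation_rad
```

The corner displacements themselves would come from FAST detection plus optical flow tracking between frames, which is omitted here.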

Journal ArticleDOI
TL;DR: It is seen that a fully automatic hybrid system, which uses the group sparsity to enhance segmentation performance and the Mobile-Net to obtain high-level robust features, can be an effective mobile solution for the sperm morphology analysis problem.
Abstract: Sperm morphology, as an indicator of fertility, is a critical tool in semen analysis. In this study, a smartphone-based hybrid system that fully automates sperm morphological analysis is introduced with the aim of eliminating unwanted human factors. The proposed hybrid system consists of two progressive steps: automatic segmentation of possible sperm shapes and classification of normal/abnormal sperm. In the segmentation step, clustering techniques with and without a group sparsity approach were tested to extract regions of interest from the images. Subsequently, a novel publicly available morphological sperm image data set, whose labels were identified by experts as non-sperm, normal, and abnormal sperm, was created as the ground truth of the classification step. In the classification step, conventional and ensemble machine learning methods were applied to domain-specific features extracted using wavelet transform and descriptors. Additionally, as an alternative to conventional features, three deep neural network architectures, which can extract high-level features from raw images after statistical learning, were employed to increase the proposed method's performance. The results show that, for the conventional features, the highest classification accuracies were achieved as 80.5% and 83.8% by using the wavelet- and descriptor-based features fed to Support Vector Machines, respectively. On the other hand, Mobile-Net, which is a very convenient network for smartphones, achieved 87% accuracy. In light of the obtained results, a fully automatic hybrid system, which uses group sparsity to enhance segmentation performance and Mobile-Net to obtain high-level robust features, can be an effective mobile solution for the sperm morphology analysis problem. Graphical abstract A fully automated hybrid human sperm detection and classification system based on Mobile-Net.

Journal ArticleDOI
TL;DR: A patient-specific computational framework is developed to virtually simulate TAVI in stenotic BAV patients using the Edwards S3 and its improved version SAPIEN 3 Ultra and quantify stent frame deformity as well as the severity of paravalvular leakage (PVL).
Abstract: Bicuspid aortic valve (BAV) anatomy has routinely been considered an exclusion in the setting of transcatheter aortic valve implantation (TAVI) because of the large dimension of the aortic annulus, which has a more calcified, bulky, and irregular shape. This study aims to develop a patient-specific computational framework to virtually simulate TAVI in stenotic BAV patients using the Edwards SAPIEN 3 valve (S3) and its improved version, the SAPIEN 3 Ultra, and to quantify stent frame deformity as well as the severity of paravalvular leakage (PVL). Specifically, the aortic root anatomy of nine BAV patients who underwent TAVI was reconstructed from pre-operative CT imaging. Crimping and deployment of the S3 frame were performed, followed by fluid-solid interaction analysis to simulate valve leaflet dynamics throughout the entire cardiac cycle. Modeling revealed that the S3 stent frame expanded well on the BAV anatomy, with an elliptical shape at the aortic annulus. Comparison of predicted S3 deformity, as assessed by eccentricity and expansion indices, demonstrated good agreement with the measurements obtained from CT imaging. Blood particle flow analysis demonstrated a backward blood jet during diastole, whereas the predicted PVL flows corresponded well with those determined by transesophageal echocardiography. This study represents a further step towards the use of personalized simulations to virtually plan TAVI, aiming at improving not only the efficacy of the implantation but also the exploration of "off-label" applications such as TAVI in the setting of BAV patients. Graphical abstract Computational frameworks of TAVI in patients with stenotic bicuspid aortic valve.
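The eccentricity index used to assess stent-frame deformity is commonly defined from the minimum and maximum cross-sectional diameters; a minimal sketch under that common definition (the paper's exact formula is assumed):

```python
def eccentricity_index(d_min, d_max):
    # 0 for a perfectly circular stent cross-section, approaching 1
    # as the frame flattens into an ellipse
    return 1.0 - d_min / d_max
```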

Journal ArticleDOI
TL;DR: The proposed method can classify normal and abnormal heart sounds with efficiency and high accuracy and was evaluated on the heart sound public dataset provided by the PhysioNet Computing in Cardiology Challenge 2016.
Abstract: We propose a novel method that combines the modified frequency slice wavelet transform (MFSWT) and a convolutional neural network (CNN) for classifying normal and abnormal heart sounds. A hidden Markov model is used to find the position of each cardiac cycle in the heart sound signal and determine the exact positions of the four periods of S1, S2, systole, and diastole. Then the one-dimensional cardiac cycle signal is converted into a two-dimensional time-frequency picture using the MFSWT. Finally, two CNN models are trained using the aforementioned pictures. We combine the two CNN models using sample entropy (SampEn) to determine which model is used to classify the heart sound signal. We evaluated our model on the public heart sound dataset provided by the PhysioNet Computing in Cardiology Challenge 2016. Experimental classification performance from a 10-fold cross-validation indicated that sensitivity (Se), specificity (Sp), and mean accuracy (MAcc) were 0.95, 0.93, and 0.94, respectively. The results show the proposed method can classify normal and abnormal heart sounds efficiently and with high accuracy. Graphical abstract Block diagram of heart sound classification.
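Sample entropy, used here to decide which CNN model classifies a given signal, can be sketched directly from its definition (tolerance r is taken in absolute units for simplicity; implementations often scale it by the signal's standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A / B), where B counts pairs of matching templates of
    # length m and A pairs of length m + 1; templates match when their
    # maximum pointwise difference is within tolerance r
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    B, A = count(m), count(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

Regular, periodic signals yield low sample entropy; noisier, less predictable signals yield higher values.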

Journal ArticleDOI
TL;DR: The experiment has shown that the performance of the proposed new feature selection method combined with twin-bounded support vector machine (FSTBSVM) is very efficient and capable of producing good results with fewer features than the original data sets.
Abstract: Early diagnosis and treatment are the most important strategies to prevent deaths from several diseases. In this regard, data mining and machine learning techniques have been useful tools to help minimize errors and to provide useful information for diagnosis. Our paper aims to present a new feature selection algorithm. In order to validate our study, we used eight benchmark data sets which are commonly used among researchers who develop machine learning methods for medical data classification. The experiments have shown that our proposed new feature selection method combined with a twin-bounded support vector machine (FSTBSVM) is very efficient. The robustness of the FSTBSVM is examined using classification accuracy and analysis of sensitivity and specificity. The proposed FSTBSVM is a very promising technique for classification, and the results show that it is capable of producing good results with fewer features than the original data sets.
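The abstract does not detail the selection algorithm itself, so as an illustrative stand-in, a simple filter-style selector ranking features by a univariate Fisher score (between-class separation over within-class spread) shows the general shape of such a step before the SVM:

```python
def fisher_score(feature, labels):
    # Univariate Fisher score for a binary-labeled feature column
    a = [f for f, y in zip(feature, labels) if y == 0]
    b = [f for f, y in zip(feature, labels) if y == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

def select_top_k(columns, labels, k):
    # Rank feature columns by Fisher score and keep the k best indices
    scores = [fisher_score(col, labels) for col in columns]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

The retained columns would then be fed to the twin-bounded SVM for classification.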

Journal ArticleDOI
TL;DR: Nanostructured TCP-wollastonite-zirconia (TCP-WS-Zr) coatings reduced the duration of implant fixation next to the hardened tissue and increased bone regeneration owing to their structure and the nanometric dimensions of the constituent phases.
Abstract: Similar to metallic implant, using the compact bio-nanocomposite can provide a suitable strength due to its high stiffness and providing sufficient adhesion between bone and orthopedic implant. Therefore, using zirconia-reinforced calcium phosphate composites with new generation of calcium silicate composites was considered in this study. Additionally, investigation of microstructure, apatite formation, and mechanical characteristic of synthetic compact bio-nanocomposite bones was performed. Desired biodegradation, optimal bioactivity, and dissolution of tricalcium phosphate (TCP) were controlled to optimize its mechanical properties. The purpose of this study was to prepare the nanostructured TCP-wollastonite-zirconia (TCP-WS-Zr) using the space holder (SH) technique. The X-ray diffraction technique (XRD) was used to confirm the existence of favorable phases in the composite’s structure. Additionally, the effects of calcination temperature on the fuzzy composition, grain size, powder crystallinity, and final coatings were investigated. Furthermore, the Fourier-transform infrared spectroscopy (FTIR) was used for fundamental analysis of the resulting powder. In order to examine the shape and size of powder’s particles, particle size analysis was performed. The morphology and microstructure of the sample’s surface was studied by scanning electron microscopy (SEM), and to evaluate the dissolution rate, adaptive properties, and the comparison with the properties of single-phase TCP, the samples were immersed in physiological saline solution (0.9% sodium chloride) for 21 days. The results of in vivo evaluation illustrated an increase in the concentration of calcium ion release and proper osseointegration ratio, and the amount of calcium ion release in composite coatings was lower than that in TCP single phase. 
Nanostructured TCP-WS-Zr coatings reduced the duration of implant fixation to the hardened tissue and increased bone regeneration, owing to the structure and nanometric dimensions of the forming phases. Finally, the animal evaluation shows that the novel bio-nanocomposite exhibits an increasing trend in the healing of the defected bone after 1 month.

Journal ArticleDOI
TL;DR: Platelet count abnormality can be considered a major factor in predicting pediatric ALL, and machine learning algorithms can be applied efficiently to provide prognostic details for better treatment outcomes.
Abstract: Pediatric acute lymphoblastic leukemia (ALL) was analyzed through machine learning (ML) techniques to determine the significance of clinical and phenotypic variables as well as environmental conditions that can identify the underlying causes of childhood ALL. Fifty pediatric patients (n = 50) who were diagnosed with acute lymphoblastic leukemia (ALL) according to the inclusion and exclusion criteria were included. Clinical variables comprised the blood biochemistry results (CBC, LFTs, RFTs) and the distribution of ALL type, i.e., T-ALL or B-ALL. Phenotypic data included the age and sex of the child and consanguinity, while environmental factors included habitat, socioeconomic status, and access to filtered drinking water. Fifteen different features/attributes were collected for each case individually. To retrieve the most useful discriminating attributes, four different supervised ML algorithms were used: classification and regression trees (CART), random forest (RF), gradient boosted machine (GBM), and the C5.0 decision tree algorithm. To determine the accuracy of the derived CART algorithm on future data, a ten-fold cross-validation was performed on the present data set. ALL was most common in male children below 5 years of age who belonged to middle-class families in rural areas. B-ALL was more frequent than T-ALL. Consanguinity was present in 54% of cases. Low levels of platelets and hemoglobin and high levels of white blood cells were reported in child ALL patients. CART provided the best and most complete fit for the entire data set, yielding a 99.83% model fit accuracy and a misclassification of 0.17% on the entire sample space, while C5.0 reported 98.6%, random forest 94.44%, and gradient boosted machine 95.61%. The variable importance of the primary discriminating attributes was platelets 43%, hemoglobin 24%, white blood cells 4%, and sex of the child 4%.
An overall accuracy of 87.4% was recorded for the classifier. Platelet count abnormality can be considered a major factor in predicting pediatric ALL, and machine learning algorithms can be applied efficiently to provide prognostic details for better treatment outcomes.
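The ten-fold cross-validation of a CART-style decision tree described above can be sketched with scikit-learn. Everything below is a synthetic stand-in, not the study's 50-patient cohort; the attribute layout and outcome rule are illustrative assumptions only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the cohort: 50 cases, 15 clinical/phenotypic
# attributes each; the outcome rule loosely mimics "low platelets and
# haemoglobin discriminate best" (columns 0 and 1 as proxies).
X = rng.normal(size=(50, 15))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

cart = DecisionTreeClassifier(random_state=0)   # scikit-learn trees are CART-based
scores = cross_val_score(cart, X, y, cv=10)     # ten-fold cross-validation
print(round(scores.mean(), 3))
```

The same `cross_val_score` call works unchanged for the other three classifiers, which makes the per-algorithm accuracy comparison in the abstract a short loop.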

Journal ArticleDOI
TL;DR: A prognostic model was built that accurately identified obese, hypertensive patients at risk for developing type 2 diabetes mellitus within a 2-year period and may help health care providers make more informed decisions.
Abstract: Prediabetes is a type of hyperglycemia in which patients have blood glucose levels above normal but below the threshold for type 2 diabetes mellitus (T2DM). Prediabetic patients are considered to be at high risk for developing T2DM, but not all will eventually do so. Because it is difficult to identify which patients have an increased risk of developing T2DM, we developed a model of several clinical and laboratory features to predict the development of T2DM within a 2-year period. We used a supervised machine learning algorithm to identify at-risk patients from among 1647 obese, hypertensive patients. The study period began in 2005 and ended in 2018. We constrained data up to 2 years before the development of T2DM. Then, using a time series analysis with the features of every patient, we calculated one linear regression line and one slope per feature. Features were then included in a K-nearest neighbors classification model. Feature importance was assessed using the random forest algorithm. The K-nearest neighbors model accurately classified patients in 96% of cases, with a sensitivity of 99%, specificity of 78%, positive predictive value of 96%, and negative predictive value of 94%. The random forest algorithm selected the homeostatic model assessment-estimated insulin resistance, insulin levels, and body mass index as the most important factors, which in combination with KNN had an accuracy of 99% with a sensitivity of 99% and specificity of 97%. We built a prognostic model that accurately identified obese, hypertensive patients at risk for developing T2DM within a 2-year period. Clinicians may use machine learning approaches to better assess risk for T2DM and better manage hypertensive patients. Machine learning algorithms may help health care providers make more informed decisions.
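The feature construction described above (one regression line, hence one slope, per laboratory feature per patient) followed by a K-nearest neighbors classifier can be sketched as follows. The cohort, visit counts, and feature stand-ins (HOMA-IR, insulin, BMI) are synthetic illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def slope_features(series):
    # One least-squares slope per laboratory feature across the patient's
    # visit history, mirroring the per-feature regression-line step.
    t = np.arange(series.shape[0])
    return np.array([np.polyfit(t, series[:, j], 1)[0]
                     for j in range(series.shape[1])])

# Synthetic cohort: 40 patients, 8 visits, 3 features; patients who later
# develop T2DM (label 1) are given an upward trend in every feature.
trend = np.linspace(0.0, 3.0, 8)[:, None]
patients = [rng.normal(size=(8, 3)) + (i % 2) * trend for i in range(40)]
X = np.stack([slope_features(p) for p in patients])
y = np.array([i % 2 for i in range(40)])          # 1 = developed T2DM

knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], y[:30])
print(knn.score(X[30:], y[30:]))
```

Feeding slopes rather than raw visit values is what lets a distance-based classifier like KNN compare patients with different absolute baselines.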

Journal ArticleDOI
TL;DR: Preliminary results suggested that the proposed algorithm can be effectively applied to the classification of motor imagery EEG signals across sessions and across subjects and the performance is better than that of the traditional machine learning algorithms.
Abstract: Transfer learning enables models to adapt to mismatches in data distributions across sessions or across subjects. In this paper, we propose a new transfer learning algorithm to classify motor imagery EEG data. By analyzing the power spectrum of EEG data related to motor imagery, the features shared across sessions or across subjects, namely, the mean and variance of the model parameters, are extracted. The data sets most relevant to the new data set are then selected according to Euclidean distance and used to update the shared features. Finally, the shared features and subject-/session-specific features are utilized jointly to generate a new model. We evaluated our algorithm on motor imagery EEG data from 10 healthy participants and on a public data set from BCI competition IV. The classification accuracy of the proposed transfer learning is higher than that of traditional machine learning algorithms. The results of the paired t test showed that the classification results of PSD and the transfer learning algorithm were significantly different (p = 2.0946e-9), and the classification results of CSP and the transfer learning algorithm were significantly different (p = 1.9122e-6). The test accuracy on data set 2a of BCI competition IV was 85.7% ± 5.4%, which was higher than that of related traditional machine learning algorithms. Preliminary results suggest that the proposed algorithm can be effectively applied to the classification of motor imagery EEG signals across sessions and across subjects, with performance better than that of traditional machine learning algorithms. It is a promising candidate for application in the field of brain-computer interfaces (BCI).
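The relevance-selection step can be sketched as ranking previously recorded sessions by the Euclidean distance between their mean feature vector and that of the new session, then keeping the closest few. This is a simplified stand-in for the paper's procedure, on synthetic band-power-like data; the feature dimensionality and session sizes are assumptions.

```python
import numpy as np

def select_relevant_sessions(new_feats, source_sessions, k=2):
    # Rank source sessions by Euclidean distance between mean feature
    # vectors and keep the k closest, as candidates for updating the
    # shared features.
    mu_new = new_feats.mean(axis=0)
    dists = [np.linalg.norm(s.mean(axis=0) - mu_new) for s in source_sessions]
    order = np.argsort(dists)
    return [source_sessions[i] for i in order[:k]], [dists[i] for i in order[:k]]

rng = np.random.default_rng(0)
new_session = rng.normal(0.0, 1.0, size=(20, 4))          # 20 trials, 4 features
# Four source sessions whose feature means sit at different offsets;
# the two near-zero ones should be selected.
sources = [rng.normal(m, 1.0, size=(50, 4)) for m in (0.1, 2.0, 0.2, 3.0)]
chosen, d = select_relevant_sessions(new_session, sources, k=2)
print(len(chosen), [round(x, 2) for x in d])
```

In the full algorithm the selected sessions would then refresh the shared mean/variance statistics before the session-specific model is trained.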

Journal ArticleDOI
TL;DR: The performance of the machine learning algorithms k-nearest neighbors (k-NN), Random Forest, Naive Bayes, and Support Vector Machine was compared, and the best classifier was determined for the diagnosis of Parkinson’s disease.
Abstract: Parkinson’s disease is a neurological disorder that causes partial or complete loss of motor reflexes and speech and affects thinking, behavior, and other vital functions of the nervous system. Parkinson’s disease causes impaired speech and motor abilities (writing, balance, etc.) in about 90% of patients and is often seen in older people. Certain signs (deterioration of the vocal cords) in medical voice recordings from Parkinson’s patients are used to diagnose this disease. The database used in this study contains biomedical voice recordings from 31 people of different ages and sexes. The performance of the machine learning algorithms k-nearest neighbors (k-NN), Random Forest, Naive Bayes, and Support Vector Machine was compared on this database, and the best classifier for the diagnosis of Parkinson’s disease was determined. Eleven different training/test splits (45/55, 50/50, 55/45, 60/40, 65/35, 70/30, 75/25, 80/20, 85/15, 90/10, 95/5) were processed separately, and the data obtained from these trainings and tests were compared with statistical measurements. The training results of the k-NN classification algorithm were generally 100% successful. The best test result, 85.81%, was obtained with the Random Forest classifier. All statistical results and measured values are given in detail in the experimental studies section. Graphical abstract
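The split-by-split comparison of the four classifiers can be sketched with scikit-learn. The data here are a synthetic stand-in generated with `make_classification` (the widely used UCI Parkinson's voice set has 195 recordings with 22 voice features, which the shapes below mimic); the accuracies it produces are not the study's results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in shaped like the UCI Parkinson's voice data.
X, y = make_classification(n_samples=195, n_features=22,
                           n_informative=6, random_state=0)
models = {"k-NN": KNeighborsClassifier(),
          "Random Forest": RandomForestClassifier(random_state=0),
          "Naive Bayes": GaussianNB(),
          "SVM": SVC()}

results = {}
# The eleven train/test splits from the abstract, expressed as test fractions.
for test_frac in (0.55, 0.50, 0.45, 0.40, 0.35, 0.30,
                  0.25, 0.20, 0.15, 0.10, 0.05):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=test_frac,
                                          random_state=0, stratify=y)
    results[test_frac] = {name: m.fit(Xtr, ytr).score(Xte, yte)
                          for name, m in models.items()}

best = max(results[0.25], key=results[0.25].get)   # inspect the 75/25 split
print(best, round(results[0.25][best], 3))
```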

Journal ArticleDOI
TL;DR: DeepSurvNet is shown to be a reliable classifier for brain cancer patients’ survival rate classification based on histopathological images, and it is concluded that DeepSurvNet constitutes a new artificial intelligence tool to assess the survival rate in brain cancer.
Abstract: Histopathological whole slide images of haematoxylin and eosin (H&E)-stained tissue were used to train DeepSurvNet, a deep network that classifies brain cancer patients’ survival rate into four classes (class I, 0–6 months; class II, 6–12 months; class III, 12–24 months; and class IV, >24 months survival after diagnosis). After training and testing the DeepSurvNet model on a public brain cancer dataset, The Cancer Genome Atlas, we generalized it using independent testing on unseen samples. Using DeepSurvNet, we obtained precisions of 0.99 and 0.8 in the testing phases on the mentioned datasets, respectively, which shows that DeepSurvNet is a reliable classifier for brain cancer patients’ survival rate classification based on histopathological images. Finally, analysis of the frequency of mutations revealed differences in the frequency and type of genes associated with each class, supporting the idea of a distinct genetic fingerprint associated with patient survival. We conclude that DeepSurvNet constitutes a new artificial intelligence tool to assess the survival rate in brain cancer.
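The class binning used as the training target can be expressed directly. The lower bound of class I is an assumption inferred from the pattern of the other three classes listed in the abstract.

```python
def survival_class(months):
    # Map survival time after diagnosis (months) to the four survival
    # classes; boundaries as listed in the abstract, class I assumed.
    if months <= 6:
        return "I"
    if months <= 12:
        return "II"
    if months <= 24:
        return "III"
    return "IV"

print(survival_class(8), survival_class(30))
```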

Journal ArticleDOI
TL;DR: Cardiologists worldwide are offered hemodynamic analysis of the medically imaged coronary arteries of their patients, computing the values of the hemodynamic parameters WSS and WPG, so as to provide them an assessment of the risk of atherosclerosis for their patients.
Abstract: Coronary arteries have high curvatures, and hence, flow through them causes disturbed flow patterns, resulting in stenosis and atherosclerosis. This in turn decreases the myocardial flow perfusion, causing myocardial ischemia and infarction. Therefore, in order to understand the mechanisms of these phenomena caused by high curvatures and branching of coronary arteries, we have conducted elaborate hemodynamic analysis for both (i) idealized coronary arteries with geometrical parameters representing realistic curvatures and stenosis and (ii) patient-specific coronary arteries with stenoses. Firstly, in idealized coronary arteries with approximated realistic arterial geometry representative of their curvedness and stenosis, we have computed the hemodynamic parameters of pressure drop, wall shear stress (WSS) and wall pressure gradient (WPG), and their association with the geometrical parameters of curvedness and stenosis. Secondly, we have similarly determined the wall shear stress and wall pressure gradient distributions in four patient-specific curved stenotic right coronary arteries (RCAs), which were reconstructed from medical images of patients diagnosed with atherosclerosis and stenosis; our results show high WSS and WPG regions at the stenoses and inner wall of the arterial curves. This paper provides useful insights into the causative mechanisms of the high incidence of atherosclerosis in coronary arteries. It also provides guidelines for how simulation of blood flow in patients’ coronary arteries and determination of the hemodynamic parameters of WSS and WPG can provide a medical assessment of the risk of development of atherosclerosis and plaque formation, leading to myocardial ischemia and infarction. The novelty of our paper lies in showing how, in actual coronary arteries (based on their CT imaging), curvilinearity and narrowing affect the computed WSS and WPG associated with the risk of atherosclerosis.
This is very important for cardiologists to be able to properly take care of their patients and provide remedial measures before coronary complications lead to myocardial infarction and necessitate stenting or coronary bypass surgery. We want to go one step further and provide a clinical application of our research work. To that end, we offer to carry out, for cardiologists worldwide, hemodynamic analysis of the medically imaged coronary arteries of their patients and to compute the values of the hemodynamic parameters WSS and WPG, so as to provide them an assessment of the risk of atherosclerosis for their patients. Graphical abstract: Figure 1 shows three-dimensional CT visualizations of arteries in patients with suspected coronary disease; the arteries appear as combinations of curved segments with stenoses at various locations highlighted by arrows. Figure 2 shows WSS and WPG surface plots of the four patient-specific arteries, with enlarged contour plots of the stenotic regions showing high WSS and high WPG at the stenoses and the inner walls of the arterial curves.
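As a back-of-the-envelope illustration of why WSS concentrates at a stenosis (the paper's results come from full CFD simulation, not from this formula), fully developed Poiseuille flow gives wall shear stress τ = 4μQ/(πr³), so at constant flow a halved lumen radius raises WSS eightfold. The flow rate, radii, and viscosity below are illustrative values only.

```python
import math

def poiseuille_wss(flow_ml_s, radius_mm, mu=0.0035):
    # Wall shear stress (Pa) for Poiseuille flow: tau = 4*mu*Q / (pi*r^3).
    # mu ~ 3.5 mPa*s is a typical blood viscosity assumption.
    q = flow_ml_s * 1e-6          # mL/s -> m^3/s
    r = radius_mm * 1e-3          # mm -> m
    return 4.0 * mu * q / (math.pi * r ** 3)

healthy = poiseuille_wss(1.0, 1.5)
stenosed = poiseuille_wss(1.0, 0.75)   # 50% diameter stenosis
# Ratio is exactly 8, since WSS scales as r^-3 at fixed flow.
print(round(healthy, 2), round(stenosed, 2), round(stenosed / healthy, 1))
```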

Journal ArticleDOI
TL;DR: The results demonstrate that genomic data are correlated with the GBM OS prediction, and the radiogenomic model outperforms both radiomic and genomic models.
Abstract: Glioblastoma multiforme (GBM) is a very aggressive and infiltrative brain tumor with a high mortality rate. Radiomic models with handcrafted features exist for estimating glioblastoma prognosis. In this work, we evaluate to what extent combining genomic with radiomic features makes an impact on the prognosis of overall survival (OS) in patients with GBM. We apply a hypercolumn-based convolutional network to segment tumor regions from magnetic resonance images (MRI), extract radiomic features (geometric, shape, histogram), and fuse them with gene expression profiling data to predict the survival rate for each patient. Several state-of-the-art regression models, such as linear regression, support vector machine, and neural network, are exploited to conduct the prognosis analysis. The Cancer Genome Atlas (TCGA) dataset of MRI and gene expression profiling is used in the study to observe the model performance on radiomic, genomic, and radiogenomic features. The results demonstrate that genomic data are correlated with GBM OS prediction, and the radiogenomic model outperforms both the radiomic and genomic models. We further illustrate the most significant genes, such as IL1B, KLHL4, ATP1A2, IQGAP2, and TMSL8, which contribute highly to the prognosis analysis. Graphical abstract: Overview of our proposed fully automated "radiogenomic" approach for survival prediction, which fuses geometric, intensity, volumetric, genomic, and clinical information to predict OS.
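The fusion step can be sketched as simple concatenation of the radiomic and genomic feature vectors followed by one of the regression models named above. All data below are synthetic stand-ins with feature counts chosen arbitrarily; the study's actual features, genes, and survival values are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 60
radiomic = rng.normal(size=(n, 10))   # geometric/shape/histogram features (synthetic)
genomic = rng.normal(size=(n, 20))    # expression levels of selected genes (synthetic)
# Synthetic OS target carrying signal from both modalities.
os_days = 300 + 40 * radiomic[:, 0] + 60 * genomic[:, 0] + rng.normal(0, 10, n)

fused = np.hstack([radiomic, genomic])          # the "radiogenomic" feature vector
Xtr, Xte, ytr, yte = train_test_split(fused, os_days, random_state=0)
model = LinearRegression().fit(Xtr, ytr)
print(round(model.score(Xte, yte), 2))
```

Because the target here depends on both blocks of features, a model trained on the fused vector recovers variance that either block alone would miss, which is the intuition behind the radiogenomic model outperforming the single-modality ones.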

Journal ArticleDOI
Fan Guo1, Weiqing Li1, Jin Tang1, Beiji Zou1, Zhun Fan2 
TL;DR: An improved UNet++ neural network is proposed to simultaneously segment the optic disc and optic cup based on the region of interest (ROI), and a gradient boosting decision tree (GBDT) classifier for glaucoma screening is trained.
Abstract: Glaucoma is a chronic disease that threatens eye health and can cause permanent blindness. Since there is no cure for glaucoma, early screening and detection are crucial for its prevention. Therefore, a novel method for automatic glaucoma screening that combines clinical measurement features with image-based features is proposed in this paper. To accurately extract clinical measurement features, an improved UNet++ neural network is proposed to simultaneously segment the optic disc and optic cup based on the region of interest (ROI). Important clinical measurement features, such as the optic cup to disc ratio, are extracted from the segmentation results. Then, the increasing field of view (IFOV) feature model is proposed to fully extract texture features, statistical features, and other hidden image-based features. Next, we select the best feature combination from all the features and use the adaptive synthetic sampling approach to alleviate the uneven distribution of the training data. Finally, a gradient boosting decision tree (GBDT) classifier for glaucoma screening is trained. Experimental results based on the ORIGA dataset show that the proposed algorithm achieves excellent glaucoma screening performance with sensitivity of 0.894, accuracy of 0.843, and AUC of 0.901, which is superior to other existing methods. Graphical abstract: Framework of the proposed glaucoma classification method.
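One of the clinical measurement features, the cup-to-disc ratio, can be computed directly from the two segmentation masks. The sketch below uses synthetic square masks and the vertical-diameter definition of the ratio, which is an assumption (the paper does not state which diameter it uses).

```python
import numpy as np

def vertical_cup_to_disc_ratio(disc_mask, cup_mask):
    # Ratio of the vertical extents of the cup and disc segmentation
    # masks (rows containing at least one foreground pixel).
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_h = disc_rows.max() - disc_rows.min() + 1
    cup_h = cup_rows.max() - cup_rows.min() + 1
    return cup_h / disc_h

disc = np.zeros((100, 100), bool); disc[20:80, 20:80] = True   # 60-px-tall disc
cup = np.zeros((100, 100), bool); cup[35:65, 35:65] = True     # 30-px-tall cup
print(vertical_cup_to_disc_ratio(disc, cup))  # 30/60 = 0.5
```

In the full pipeline this scalar would be one entry of the feature vector handed, together with the IFOV image features, to the GBDT classifier.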

Journal ArticleDOI
TL;DR: A novel end-to-end deep learning network to automatically measure the fetal head circumference, biparietal diameter, and occipitofrontal diameter from 2D ultrasound images and results show that the method can achieve better performance than the existing fetal head measurement methods.
Abstract: Measurement of anatomical structures from ultrasound images requires the expertise of experienced clinicians. Moreover, there are artificial factors that make automatic measurement complicated. In this paper, we aim to present a novel end-to-end deep learning network to automatically measure the fetal head circumference (HC), biparietal diameter (BPD), and occipitofrontal diameter (OFD) from 2D ultrasound images. Fully convolutional neural networks (FCNNs) have shown significant improvement in natural image segmentation. Therefore, to overcome the potential difficulties in automated segmentation, we present a novel FCNN with an added regression branch for predicting OFD and BPD in parallel. In the segmentation branch, a feature pyramid inside our network is built from low-level feature layers to handle the variety of fetal head appearances in ultrasound images, which differs from traditional feature pyramid building methods. In order to select the most useful scale and reduce scale noise, an attention mechanism is applied to filter the features. In the regression branch, for accurate estimation of the OFD and BPD lengths, a new region of interest (ROI) pooling layer is proposed to extract the elliptic feature map. We also evaluate the performance of our method on a large dataset, HC18. Our experimental results show that our method achieves better performance than existing fetal head measurement methods.
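The three measurements are geometrically related: if the skull outline is treated as an ellipse with BPD and OFD as its axes, HC follows from an ellipse-perimeter approximation. The sketch below uses Ramanujan's first approximation as an illustrative clinical shortcut; it is not the network's learned estimator, and the millimetre values are invented.

```python
import math

def head_circumference(bpd, ofd):
    # Approximate HC by treating the head outline as an ellipse with
    # semi-axes a = BPD/2 and b = OFD/2, using Ramanujan's formula:
    #   C ~= pi * (3(a+b) - sqrt((3a+b)(a+3b)))
    a, b = bpd / 2.0, ofd / 2.0
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

print(round(head_circumference(94.0, 120.0), 1))  # mm, illustrative values
```

The formula reduces exactly to 2πr when BPD = OFD, which is a convenient sanity check.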

Journal ArticleDOI
TL;DR: Investigation of the effects of two modifications to the locking compression plate (LCP) suggested that the modifications could improve the performance of fracture fixation, and that the survivorship of other orthopaedic implants could therefore likely also be enhanced by customisation via 3D printing.
Abstract: 3D printing allows product customisation to be cost efficient, which presents an opportunity for innovation. This study investigated the effects of two modifications to the locking compression plate (LCP), an established orthopaedic implant used for fracture fixation. The first was to fill unused screw holes over the fracture site. The second was to reduce the Young’s modulus by changing the microarchitecture of the LCP. Both are easily customisable with 3D printing. Finite element (FE) models of a fractured human tibia fixed with 4.5/5.0 mm LCPs were created, and FE simulations were conducted to examine the stress distribution within the LCPs. Next, a material sweep was performed to examine the effects of lowering the Young’s modulus of the LCPs. Results showed that at a knee joint loading of 3× body weight, peak stress was lowered to 390.0 MPa in the modified broad LCP, compared with 565.1 MPa in the original LCP. They also showed that the Young’s modulus of the material could be lowered to 50 GPa before the minimum principal stresses increased exponentially. These findings suggest that the modifications could improve the performance of fracture fixation, and that the survivorship of other orthopaedic implants could therefore likely also be enhanced by customisation via 3D printing.

Journal ArticleDOI
TL;DR: The investigated combinedtcPO2, tcPCO2, and SpO2 sensor with a new oxygen fluorescence quenching technique is clinically usable and provides good overall accuracy and negligible tcPO2 drift.
Abstract: This study investigated the accuracy, drift, and clinical usefulness of a new optical transcutaneous oxygen tension (tcPO2) measuring technique, combined with a conventional electrochemical transcutaneous carbon dioxide (tcPCO2) measurement and reflectance pulse oximetry in the novel transcutaneous OxiVenT™ Sensor. In vitro gas studies were performed to measure accuracy and drift of tcPO2 and tcPCO2. Clinical usefulness for tcPO2 and tcPCO2 monitoring was assessed in neonates. In healthy adult volunteers, measured oxygen saturation values (SpO2) were compared with arterially sampled oxygen saturation values (SaO2) during controlled hypoxemia. In vitro correlation and agreement with gas mixtures of tcPO2 (r = 0.999, bias 3.0 mm Hg, limits of agreement − 6.6 to 4.9 mm Hg) and tcPCO2 (r = 0.999, bias 0.8 mm Hg, limits of agreement − 0.7 to 2.2 mm Hg) were excellent. In vitro drift was negligible for tcPO2 (0.30 (0.63 SD) mm Hg/24 h) and highly acceptable for tcPCO2 (− 2.53 (1.04 SD) mm Hg/12 h). Clinical use in neonates showed good usability and feasibility. SpO2-SaO2 correlation (r = 0.979) and agreement (bias 0.13%, limits of agreement − 3.95 to 4.21%) in healthy adult volunteers were excellent. The investigated combined tcPO2, tcPCO2, and SpO2 sensor with a new oxygen fluorescence quenching technique is clinically usable and provides good overall accuracy and negligible tcPO2 drift. Accurate and low-drift tcPO2 monitoring offers improved measurement validity for long-term monitoring of blood and tissue oxygenation.
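The bias and limits of agreement quoted above are standard Bland–Altman statistics (mean difference ± 1.96 SD). A minimal sketch, using invented paired SpO2/SaO2 readings rather than the study's data:

```python
import numpy as np

def bland_altman(measured, reference):
    # Bias (mean difference) and 95% limits of agreement between two
    # measurement methods, per Bland & Altman.
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

spo2 = [97.0, 95.5, 92.0, 89.5, 99.0]   # illustrative sensor readings (%)
sao2 = [96.5, 95.0, 93.0, 90.0, 98.5]   # illustrative arterial samples (%)
bias, lo, hi = bland_altman(spo2, sao2)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```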

Journal ArticleDOI
TL;DR: A machine-based method that differentiates between the two types of liver cancer from multi-phase abdominal computerized tomography (CT) scans is developed and has great potential in helping radiologists diagnose liver cancer.
Abstract: Liver and bile duct cancers are leading causes of cancer death worldwide. The most common are hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC). The influencing factors and prognosis of HCC and ICC differ, so precise classification of these two liver cancers is essential for treatment and prevention plans. The aim of this study is to develop a machine-based method that differentiates between the two types of liver cancer from multi-phase abdominal computerized tomography (CT) scans. The proposed method consists of two major steps. In the first step, the liver is segmented from the original images using a convolutional neural network model, together with task-specific pre-processing and post-processing techniques. In the second step, by examining the intensity histograms of the segmented images, we extract features from regions that discriminate between HCC and ICC and use them as input for classification with a support vector machine model. Testing on a dataset of labeled multi-phase CT scans provided by Maharaj Nakorn Chiang Mai Hospital, Thailand, we obtained 88% classification accuracy. Our proposed method has great potential in helping radiologists diagnose liver cancer.
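The second step can be sketched as intensity-histogram features fed to an SVM. The "regions" below are synthetic 1D intensity samples whose class means differ; the bin count, intensity range, and class separation are assumptions for illustration, not the study's CT data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def histogram_features(region, bins=16):
    # Normalised intensity histogram of a segmented region, used as
    # the discriminating feature vector (bin count is an assumption).
    hist, _ = np.histogram(region, bins=bins, range=(0, 255), density=True)
    return hist

# Synthetic "HCC"-like vs "ICC"-like regions differing in mean intensity.
X = np.stack([histogram_features(
                  rng.normal(100 + 40 * (i % 2), 20, 500).clip(0, 255))
              for i in range(40)])
y = np.array([i % 2 for i in range(40)])

clf = SVC().fit(X[:30], y[:30])
print(clf.score(X[30:], y[30:]))
```

Histogram features discard spatial layout, which is exactly why the CNN segmentation step matters: the histogram must come from the liver region only, not the whole scan.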

Journal ArticleDOI
TL;DR: The aim of this paper is to survey vision-based CAD systems, especially focusing on segmentation techniques for the pathological bone disease known as osteoporosis; it also gives future directions to improve osteoporosis diagnosis.
Abstract: Computer-aided diagnosis (CAD) has revolutionized the field of medical diagnosis. CAD systems assist in improving treatment potential and increase survival rates by diagnosing diseases early in an efficient, timely, and cost-effective way. Automatic segmentation has enabled radiologists to segment the region of interest successfully and thereby improve the diagnosis of diseases from medical images, which is not as efficiently possible by manual segmentation. The aim of this paper is to survey vision-based CAD systems, especially focusing on segmentation techniques for the pathological bone disease known as osteoporosis. Osteoporosis is a state in which the mineral density of the bones decreases and they become porous, making them easily susceptible to fractures from a minor injury or a fall. The article covers the image acquisition techniques for acquiring medical images for osteoporosis diagnosis, and it discusses the advanced machine learning paradigms employed in segmentation for osteoporosis. Other image processing steps in osteoporosis diagnosis, such as feature extraction and classification, are also briefly described. Finally, the paper gives future directions to improve osteoporosis diagnosis and presents the proposed architecture. Graphical abstract.