
Showing papers in "Biomedical Engineering Online in 2019"


Journal ArticleDOI
TL;DR: The technological principles of processing collagen-rich tissues down to collagen hydrolysates and the methods to rebuild differently shaped products are given, and the effects of the processing steps on the final material properties are discussed.

Abstract: Collagen, the most abundant extracellular matrix protein in the animal kingdom, belongs to a family of fibrous proteins, which transfer load in tissues and which provide a highly biocompatible environment for cells. This high biocompatibility makes collagen a perfect biomaterial for implantable medical products and scaffolds for in vitro testing systems. To manufacture collagen-based solutions, porous sponges, membranes and threads for surgical and dental purposes or cell culture matrices, collagen-rich tissues such as skin and tendon of mammals are intensively processed by physical and chemical means. Other tissues such as pericardium and intestine are more gently decellularized while maintaining their complex collagenous architectures. Tissue processing technologies are organized as a series of steps, which are combined in different ways to manufacture structurally versatile materials with varying properties in strength, stability against temperature and enzymatic degradation, and cellular response. Complex structures are achieved by combined technologies: different drying techniques are performed simultaneously with sterilisation steps and the preparation of porous structures, and chemical crosslinking is combined with casting steps such as spinning, moulding or additive manufacturing techniques. Important progress is expected from collagen-based bio-inks, which can be formed into 3D structures and combined with live cells. This review gives an overview of the technological principles of processing collagen-rich tissues down to collagen hydrolysates and the methods to rebuild differently shaped products. The effects of the processing steps on the final material properties are discussed, especially with regard to the thermal and physical properties and the susceptibility to enzymatic degradation. These properties are key features for biological and clinical application, handling and metabolization.

274 citations


Journal ArticleDOI
TL;DR: Using ImageNet-trained models is a robust alternative for automatic glaucoma screening systems; the high specificity and sensitivity obtained are supported by an extensive validation using not only the cross-validation strategy but also cross-testing validation on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases.

Abstract: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted features based on segmentation, which are affected by the performance of the chosen segmentation method and the extracted features. Among other characteristics, convolutional neural networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. Using five public databases (1707 images), an average AUC of 0.9605 with a 95% confidence interval of 95.92–97.07%, an average specificity of 0.8580 and an average sensitivity of 0.9346 were obtained after using the Xception architecture, significantly improving on the performance of other state-of-the-art works. Moreover, a new clinical database, ACRIMA, has been made publicly available, containing 705 labelled images. It is composed of 396 glaucomatous images and 309 normal images, which makes it the largest public database for glaucoma diagnosis. The high specificity and sensitivity obtained from the proposed approach are supported by an extensive validation using not only the cross-validation strategy but also cross-testing validation on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases. These results suggest that using ImageNet-trained models is a robust alternative for automatic glaucoma screening systems. All images, CNN weights and software used to fine-tune and test the five CNNs are publicly available, which could be used as a testbed for further comparisons.
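
A minimal sketch of the transfer-learning recipe the paper describes, fine-tuning an ImageNet-pretrained Xception backbone for binary glaucoma classification in Keras. The input size, layer-freezing policy, optimizer and classification head here are illustrative assumptions, not the authors' released configuration (their actual weights and code are public):

```python
# Sketch: fine-tune an ImageNet-pretrained Xception for glaucoma vs. normal.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional base for the first phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # glaucomatous vs. normal fundus
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# Optionally unfreeze the top of `base` afterwards for a low-LR second phase.
```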

194 citations


Journal ArticleDOI
TL;DR: In this article, an automatic analysis of retinal images using a convolutional neural network (CNN) is presented, which incorporates a novel technique utilizing a two-stage process with two online datasets, resulting in accurate detection while solving the data imbalance problem and decreasing training time in comparison with previous studies.

Abstract: Diabetic retinopathy (DR) is the leading cause of blindness worldwide, and therefore its early detection is important in order to reduce disease-related eye injuries. DR is diagnosed by inspecting fundus images. Since microaneurysms (MA) are one of the main symptoms of the disease, distinguishing this complication within the fundus images facilitates early DR detection. In this paper, an automatic analysis of retinal images using a convolutional neural network (CNN) is presented. Our method incorporates a novel technique utilizing a two-stage process with two online datasets, which results in accurate detection while solving the data imbalance problem and decreasing training time in comparison with previous studies. We have implemented our proposed CNNs using the Keras library. In order to evaluate our proposed method, an experiment was conducted on two standard publicly available datasets, i.e., the Retinopathy Online Challenge dataset and the E-Ophtha-MA dataset. Our results demonstrated a promising sensitivity value of about 0.8 for an average of >6 false positives per image, which is competitive with state-of-the-art approaches. Our method indicates a significant improvement in MA detection using retinal fundus images for monitoring diabetic retinopathy.

91 citations


Journal ArticleDOI
TL;DR: In this paper, the technical development of various CAS diagnosis imaging modalities and their impact on clinical efficacy is thoroughly reviewed.

Abstract: In the past few decades, imaging has been developed to a high level of sophistication. Improvements from one-dimensional (1D) to 2D images, and from 2D images to 3D models, have revolutionized the field of imaging. This not only helps in diagnosing various critical and fatal diseases in the early stages but also contributes to making informed clinical decisions on the follow-up treatment profile. Carotid artery stenosis (CAS) may potentially cause debilitating stroke, and its accurate early detection is therefore important. In this paper, the technical development of various CAS diagnosis imaging modalities and their impact on clinical efficacy is thoroughly reviewed. These imaging modalities include duplex ultrasound (DUS), computed tomography angiography (CTA) and magnetic resonance angiography (MRA). For each of the imaging modalities considered, the imaging methodology (principle), critical imaging parameters, and the extent to which the vulnerable plaque can be imaged are discussed. DUS is usually the initial recommended CAS diagnostic examination. However, for therapeutic intervention, either MRA or CTA is recommended for confirmation, and for added information on intracranial cerebral circulation and aortic arch condition for procedural planning. Over the past few decades, the focus of CAS diagnosis has also shifted from pure stenosis quantification to plaque characterization. This has led to further advancement of the existing imaging tools and the development of other potential imaging tools like optical coherence tomography (OCT), photoacoustic tomography (PAT), and infrared (IR) thermography.

86 citations


Journal ArticleDOI
TL;DR: The results demonstrated that the proposed clustering-algorithm-based method can generate the training dataset for CNN models, and that the resulting model can segment lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.

Abstract: Lung segmentation constitutes a critical procedure for any clinical-decision supporting system aimed at improving the early diagnosis and treatment of lung diseases. Abnormal lungs mainly comprise lung parenchyma, which shows commonalities on CT images across subjects, diseases and CT scanners, and lung lesions, which present various appearances. Segmentation of lung parenchyma can help locate and analyze the neighboring lesions, but is not well studied in the framework of machine learning. We proposed to segment lung parenchyma using a convolutional neural network (CNN) model. To reduce the workload of manually preparing the dataset for training the CNN, a clustering-based method is first proposed. Specifically, after splitting CT slices into image patches, the k-means clustering algorithm with two categories is performed twice, using the mean and the minimum intensity of each image patch, respectively. A cross-shaped verification, a volume intersection, a connected-component analysis and a patch expansion then follow to generate the final dataset. Secondly, we design a CNN architecture consisting of only one convolutional layer with six kernels, followed by one maximum pooling layer and two fully connected layers. Using the generated dataset, a variety of CNN models are trained and optimized, and their performances are evaluated by eightfold cross-validation. A separate validation experiment is further conducted using a dataset of 201 subjects (4.62 billion patches) with lung cancer or chronic obstructive pulmonary disease, scanned by CT or PET/CT. The segmentation results of our method are compared with those yielded by manual segmentation and some available methods. A total of 121,728 patches are generated to train and validate the CNN models. After parameter optimization, our CNN model achieves an average F-score of 0.9917 and an area under the curve of up to 0.9991 for the classification of lung parenchyma and non-lung-parenchyma. The obtained model can segment the lung parenchyma accurately for the 201 subjects with heterogeneous lung diseases and CT scanners. The overlap ratio between the manual segmentation and the one by our method reaches 0.96. The results demonstrated that the proposed clustering-based method can generate the training dataset for CNN models. The obtained CNN model can segment lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.
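
A sketch of the clustering-based labeling idea, under assumptions about patch size and feature layout; the paper's follow-up steps (cross-shaped verification, volume intersection, connected-component analysis, patch expansion) are only noted in comments:

```python
# Sketch: label CT image patches by running 2-class k-means twice,
# once on patch mean intensity and once on patch minimum intensity.
import numpy as np
from sklearn.cluster import KMeans

def patch_features(ct_slice, size=32):
    """Split a 2-D CT slice into non-overlapping patches (size is assumed)."""
    h, w = ct_slice.shape
    patches = [ct_slice[i:i + size, j:j + size]
               for i in range(0, h - size + 1, size)
               for j in range(0, w - size + 1, size)]
    means = np.array([p.mean() for p in patches]).reshape(-1, 1)
    mins = np.array([p.min() for p in patches]).reshape(-1, 1)
    return patches, means, mins

def cluster_labels(ct_slice):
    patches, means, mins = patch_features(ct_slice)
    lbl_mean = KMeans(n_clusters=2, n_init=10).fit_predict(means)
    lbl_min = KMeans(n_clusters=2, n_init=10).fit_predict(mins)
    # Patches on which both clusterings agree can seed the training set;
    # the paper then applies cross-shaped verification, volume intersection,
    # connected-component analysis and patch expansion before training.
    return patches, lbl_mean, lbl_min
```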

77 citations


Journal ArticleDOI
TL;DR: The proposed system can analyze a full pap-smear slide within 3 min, as opposed to the 5–10 min per slide of manual analysis, and reduces the time required by the cytotechnician to screen large numbers of pap-smears by eliminating the obviously normal ones, so that more time can be spent on the suspicious slides.

Abstract: Cervical cancer is preventable if effective screening measures are in place. Pap-smear is the commonest technique used for early screening and diagnosis of cervical cancer. However, manual analysis of pap-smears is error-prone due to human mistake; moreover, the process is tedious and time-consuming. Hence, it is beneficial to develop a computer-assisted diagnosis tool to make the pap-smear test more accurate and reliable. This paper describes the development of a tool for automated diagnosis and classification of cervical cancer from pap-smear images. Scene segmentation was achieved through a Trainable Weka Segmentation classifier, and a sequential elimination approach was used for debris rejection. Feature selection was achieved using simulated annealing integrated with a wrapper filter, while classification was achieved using a fuzzy C-means algorithm. The evaluation of the classifier was carried out on three different datasets (single-cell images, multiple-cell images and pap-smear slide images from a pathology lab). Overall classification accuracy, sensitivity and specificity of '98.88%, 99.28% and 97.47%', '97.64%, 98.08% and 97.16%' and '95.00%, 100% and 90.00%' were obtained for each dataset, respectively. The higher accuracy and sensitivity of the classifier were attributed to the robustness of the feature selection method, which accurately selected cell features that improved the classification performance, and to the number of clusters used during defuzzification and classification. Results show that the method outperforms many of the existing algorithms in sensitivity (99.28%), specificity (97.47%), and accuracy (98.88%) when applied to the Herlev benchmark pap-smear dataset. A false negative rate, false positive rate and classification error of 0.00%, 10.00% and 5.00%, respectively, were obtained when applied to pap-smear slides from a pathology lab. The major contribution of this tool in a cervical cancer screening workflow is that it reduces the time required by the cytotechnician to screen large numbers of pap-smears by eliminating the obviously normal ones, so that more time can be spent on the suspicious slides. The proposed system can analyze a full pap-smear slide within 3 min, as opposed to the 5–10 min per slide of manual analysis. The tool presented in this paper is applicable to many pap-smear analysis systems but is particularly pertinent to low-cost systems that should be of significant benefit to developing economies.
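
The final classification step uses fuzzy C-means; below is a minimal numpy sketch of the standard FCM update rules (the paper's Weka-based segmentation, debris rejection and simulated-annealing feature selection are not reproduced here):

```python
# Sketch: standard fuzzy C-means with fuzzifier m; defuzzify via argmax.
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ X / um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        u = 1.0 / (d ** (2.0 / (m - 1.0)))     # u_ik proportional to d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)      # rows sum to 1
    return centers, u

# Usage: centers, u = fuzzy_c_means(features); labels = u.argmax(axis=1)
```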

67 citations


Journal ArticleDOI
Na Qu, Yating Sun, Yujing Li, Fei Hao, Pengyu Qiu, Lesheng Teng, Jing Xie, Yin Gao
TL;DR: A DTX-loaded human serum albumin (HSA) nanoparticle (DTX-NP) was designed to overcome the hypersensitivity reactions that are induced by polysorbate 80 and yielded similar anti-tumor activity but were accompanied by less systemic toxicity than solvent formulated DTX.
Abstract: Docetaxel (DTX) is an anticancer drug that is currently formulated with polysorbate 80 and ethanol (50:50, v/v) in clinical use. Unfortunately, this formulation causes hypersensitivity reactions, leading to severe side-effects, which have been primarily attributed to polysorbate 80. In this study, a DTX-loaded human serum albumin (HSA) nanoparticle (DTX-NP) was designed to overcome the hypersensitivity reactions that are induced by polysorbate 80. The methods of preparing the DTX-NPs were optimized based on factors including the drug-to-HSA weight ratio, the duration of HSA incubation, and the choice of stabilizer. Synthesized DTX-NPs were characterized with regard to their particle diameters, drug loading capacities, and drug release kinetics. The morphology of the DTX-NPs was observed via scanning electron microscopy (SEM), and the successful preparation of DTX-NPs was confirmed via differential scanning calorimetry (DSC). The cytotoxicity and cellular uptake of DTX-NPs were investigated in the non-small cell lung cancer cell line A549, and the maximum tolerated dose (MTD) of DTX-NPs was evaluated in BALB/c mice. The study showed that the loading capacity and the encapsulation efficiency of DTX-NPs prepared under the optimal conditions were 11.2 wt% and 63.1 wt%, respectively, and the mean diameter was less than 200 nm, resulting in higher permeability and controlled release. Similar cytotoxicity against A549 cells was exhibited by the DTX-NPs in comparison to DTX alone, while a higher maximum tolerated dose (MTD) was demonstrated in mice with the DTX-NPs (75 mg/kg) than with DTX (30 mg/kg), suggesting that the DTX-NPs prepared with HSA yielded similar anti-tumor activity accompanied by less systemic toxicity than solvent-formulated DTX. DTX-NPs warrant further investigation and are promising candidates for clinical applications.
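
The reported loading capacity (LC) and encapsulation efficiency (EE) follow the definitions commonly used for drug-loaded nanoparticles (assumed here, since the abstract does not spell them out):

$$\mathrm{LC}=\frac{m_{\text{encapsulated drug}}}{m_{\text{nanoparticles}}}\times 100\%,\qquad \mathrm{EE}=\frac{m_{\text{encapsulated drug}}}{m_{\text{drug added}}}\times 100\%$$

On these definitions, the optimal batch encapsulated 63.1% of the added docetaxel, and the drug accounted for 11.2% of the total nanoparticle mass.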

54 citations


Journal ArticleDOI
TL;DR: A new preprocessing pipeline named multiple-channels-multiple-landmarks (MCML), aiming to synthesize color fundus images from a combination of vessel tree, optic disc, and optic cup images is proposed, which outperforms the single vessel-based methods for each architecture of GANs.
Abstract: Medical datasets, especially medical images, are often imbalanced due to the different incidences of various diseases. To address this problem, many methods have been proposed to synthesize medical images using generative adversarial networks (GANs) to enlarge training datasets for facilitating medical image analysis. For instance, conventional methods such as image-to-image translation techniques are used to synthesize fundus images with their respective vessel trees in the field of fundus imaging. In order to improve the image quality and details of the synthetic images, three key aspects of the pipeline are elaborated: the input mask, the architecture of the GANs, and the resolution of the paired images. We propose a new preprocessing pipeline named multiple-channels-multiple-landmarks (MCML), aiming to synthesize color fundus images from a combination of vessel tree, optic disc, and optic cup images. We compared both single vessel mask input and MCML mask input on two public fundus image datasets (DRIVE and DRISHTI-GS) with different kinds of Pix2pix and Cycle-GAN architectures. A new Pix2pix structure with a ResU-net generator was also designed and compared with the other models. As shown in the results, the proposed MCML method outperforms the single-vessel-based methods for each GAN architecture. Furthermore, we find that our Pix2pix model with ResU-net generator achieves superior PSNR and SSIM performance compared with the other GANs. High-resolution paired images are also beneficial for improving the performance of each GAN in this work. Finally, a Pix2pix network with ResU-net generator using MCML and high-resolution paired images is able to generate good and realistic fundus images, indicating that our MCML method has great potential in the field of glaucoma computer-aided diagnosis based on fundus images.
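
A sketch of the MCML input construction, assuming the three binary masks are already extracted and co-registered; the function name and mask variables are illustrative:

```python
# Sketch: MCML stacks vessel tree, optic disc and optic cup masks as
# separate channels of the conditional input to an image-to-image GAN
# such as Pix2pix, instead of a single binary vessel mask.
import numpy as np

def build_mcml_input(vessel_mask, disc_mask, cup_mask):
    """Each mask: 2-D binary array of identical shape -> (H, W, 3) input."""
    mcml = np.stack([vessel_mask, disc_mask, cup_mask], axis=-1)
    return mcml.astype(np.float32)

# A Pix2pix generator then learns the mapping mcml -> color fundus image,
# trained pixel-to-pixel against the real paired photograph.
```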

54 citations


Journal ArticleDOI
TL;DR: This study provides systematic preclinical evidence that silk fibroin promotes wound healing as a wound dressing, thereby establishing a foundation for its further application in new treatment options for wound repair and regeneration.

Abstract: Silk fibroin hydrogel, derived from Bombyx mori cocoons, has been shown to have potential effects on wound healing due to its excellent biocompatibility, low immunogenicity and biodegradability. Many studies suggest silk fibroin as a promising wound-dressing material, and it can support the adhesion and proliferation of a variety of human cells in vitro. However, a lack of translational evidence has hampered its clinical application for skin repair. Herein, a heparin-immobilized fibroin hydrogel was fabricated to deliver FGF1 (human acidic fibroblast growth factor 1) on top of wounds in rats with full-thickness skin excision, and comprehensive preclinical studies were performed to fully evaluate its safety and effectiveness. The wound-healing efficiency of the developed fibroin hydrogels was evaluated in a full-thickness wound model in rats, compared with the chitosan used clinically. The water absorption, swelling ratio, accumulative FGF1 release rate and biodegradation ratio of the fabricated hydrogels were measured. The regenerated fibroin hydrogels, with good water-uptake properties, rapidly swelled to a 17.3-fold maximum swelling ratio over 12 h, and a total of 40.48 ± 1.28% of the hydrogel mass was lost within 15 days. Furthermore, the accumulative release data suggested that heparinized hydrogels possessed effective FGF1 release behavior. Full-thickness skin excisions were then created in rats and left untreated or covered with heparinized fibroin hydrogels loaded with recombinant human FGF1. Histological evaluation using hematoxylin and eosin (HE) and Masson's trichrome (MT) staining was performed to observe dermis formation and collagen deposition at the wound-healing site. To evaluate the wound-healing mechanisms induced by fibroin hydrogel treatment, wound-healing scratch and cell proliferation assays were performed. It was found that both fibroin hydrogels and FGF1 can facilitate the proliferation and migration of L929 fibroblast cells. This study provides systematic preclinical evidence that silk fibroin promotes wound healing as a wound dressing, thereby establishing a foundation for its further application in new treatment options for wound repair and regeneration.
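
The swelling behaviour quoted above is typically quantified by the gravimetric swelling ratio; assuming the standard definition

$$\mathrm{SR}=\frac{W_s-W_d}{W_d}$$

where $W_s$ is the swollen and $W_d$ the dry hydrogel weight, the reported 17.3-fold maximum would correspond, on this reading, to the equilibrium value of SR reached after about 12 h of immersion.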

46 citations


Journal ArticleDOI
Haihan Duan, Yunzhi Huang, Lunxin Liu, Huming Dai, Liangyin Chen, Liangxue Zhou
TL;DR: This study demonstrates that it is feasible to assist physicians in detecting intracranial aneurysms on DSA images using a CNN, and illustrates that the proposed two-stage CNN-based architecture is more accurate and faster than the classical DIP methods of existing research studies.

Abstract: An intracranial aneurysm is a cerebrovascular disorder that can result in various diseases. Clinically, diagnosis of an intracranial aneurysm utilizes the digital subtraction angiography (DSA) modality as the gold standard. The existing automatic computer-aided diagnosis (CAD) studies with the DSA modality were based on classical digital image processing (DIP) methods. However, the classical feature extraction methods were badly hampered by complex vascular distributions, and the sliding-window methods were time-consuming during searching and feature extraction. Therefore, developing an accurate and efficient CAD method to detect intracranial aneurysms on DSA images is a meaningful task. In this study, we proposed a two-stage convolutional neural network (CNN) architecture to automatically detect intracranial aneurysms on 2D-DSA images. In the region localization stage (RLS), our detection system locates a specific region to reduce the interference of other regions. Then, in the aneurysm detection stage (ADS), the detector combines the information of the frontal and lateral angiographic views to identify intracranial aneurysms, with a false-positive suppression algorithm. Our study was experimented on the posterior communicating artery (PCoA) region of the internal carotid artery (ICA). The data set contained 241 subjects for model training and 40 prospectively collected subjects for testing. Compared with the classical DIP method, which had an accuracy of 62.5% and an area under the curve (AUC) of 0.69, the proposed architecture achieved an accuracy of 93.5% and an AUC of 0.942. In addition, the detection time of our method was about 0.569 s, roughly one hundred times faster than the 62.546 s of the classical DIP method. The results illustrate that our proposed two-stage CNN-based architecture is more accurate and faster than the classical DIP methods of existing research studies. Overall, our study demonstrates that it is feasible to assist physicians in detecting intracranial aneurysms on DSA images using a CNN.

38 citations


Journal ArticleDOI
TL;DR: A two-stage grading system based on convolutional neural networks (CNNs) is proposed to automatically evaluate breast tumors from ultrasound images into five categories; it extracts effective features from the breast ultrasound images for the final classification by decoupling the identification features and classification features with different CNNs.

Abstract: Quantizing the Breast Imaging Reporting and Data System (BI-RADS) criteria into different categories with the single ultrasound modality has always been a challenge. To achieve this, we proposed a two-stage grading system to automatically evaluate breast tumors from ultrasound images into five categories based on convolutional neural networks (CNNs). This newly developed automatic grading system consists of two stages: tumor identification and tumor grading. The constructed network for tumor identification, denoted as ROI-CNN, can identify the region containing the tumor in the original breast ultrasound images. The following tumor categorization network, denoted as G-CNN, can generate effective features for differentiating the identified regions of interest (ROIs) into five categories: Category "3", Category "4A", Category "4B", Category "4C", and Category "5". Particularly, to make the regions predicted by the ROI-CNN fit the tumor better, a Level-set-based refinement procedure was leveraged as a bridge between the identification stage and the grading stage. We tested the proposed two-stage grading system on 2238 cases with breast tumors in ultrasound images. With accuracy as an indicator, our automatic computerized evaluation for grading breast tumors exhibited a performance comparable to that of the subjective categories determined by physicians. Experimental results show that our two-stage framework achieves an accuracy of 0.998 on Category "3", 0.940 on Category "4A", 0.734 on Category "4B", 0.922 on Category "4C", and 0.876 on Category "5". The proposed scheme can extract effective features from breast ultrasound images for the final classification of breast tumors by decoupling the identification features and classification features with different CNNs. Besides, the proposed scheme extends the diagnosis of breast tumors in ultrasound images to five sub-categories according to BI-RADS, rather than merely distinguishing malignant breast tumors from benign ones.

Journal ArticleDOI
TL;DR: This study demonstrated that a very accurate radial pressure waveform can be reproduced using the cam-based simulator, and it can be concluded that the same testing and design methods can be used to generate pulse waveforms for other age groups or any target pulse waveforms.

Abstract: There exists a growing need for a cost-effective, reliable, and portable pulsation simulator that can generate a wide variety of pulses depending on age and cardiovascular disease. To construct a compact pulsation simulator, this study proposes to use a pneumatic actuator based on a cam-follower mechanism controlled by a DC motor. The simulator is intended to generate pulse waveforms for a range of pulse pressures and heart rates that are realistic to human blood pulsations. This study first performed in vivo testing of a healthy young man to collect his pulse waveforms using a robotic tonometry system (RTS). Based on the collected data, a representative human radial pulse waveform was obtained through mathematical analysis. This standard pulse waveform was then used to design the cam profile. Upon fabrication of the cam, the pulsatile simulator, consisting of the pulse pressure generating component, pressure and heart rate adjusting units, and the real-time pulse display, was constructed. Using the RTS, a series of tests was performed on the prototype to collect its pulse waveforms while varying the pressure levels and heart rates. Following the testing, the pulse waveforms generated by the prototype were compared with the representative in vivo pulse waveform. The radial augmentation index analysis results show that the percent error between the simulator data and the human pulse profiles is sufficiently small, indicating that the first two peak pressures agree well. Moreover, the phase analysis results show that the phase delay errors between the pulse waveforms of the prototype and the representative waveform are adequately small, confirming that the prototype simulator is capable of simulating realistic human pulse waveforms. This study demonstrated that a very accurate radial pressure waveform can be reproduced using the cam-based simulator. It can be concluded that the same testing and design methods can be used to generate pulse waveforms for other age groups or any target pulse waveforms. Such a simulator can contribute to research efforts such as the development of wearable pressure sensors, the standardization of pulse diagnosis in oriental medicine, and the training of medical professionals in pulse diagnosis techniques.
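
The first-two-peaks comparison relies on the radial augmentation index; one common definition, assumed here since the abstract does not state it, is

$$\mathrm{AI}_r=\frac{P_2-P_d}{P_1-P_d}\times 100\%$$

where $P_1$ and $P_2$ are the early and late systolic peak pressures and $P_d$ is the diastolic pressure. Matching the first two peaks of the simulated and in vivo waveforms therefore keeps the two indices close.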

Journal ArticleDOI
TL;DR: A novel computational approach for estimating eye fatigue through various verifiable models is proposed, along with a new scheme to assess the eye fatigue of HMD users by analysing eye-tracker parameters.

Abstract: Head-mounted displays (HMDs) and virtual reality (VR) have been used frequently in recent years, and a user's experience and computation efficiency can be assessed by mounting eye-trackers. However, in addition to visually induced motion sickness (VIMS), eye fatigue has increasingly emerged during and after the viewing experience, highlighting the necessity of quantitative assessment of these detrimental effects. As no measurement method for the eye fatigue caused by HMDs has been widely accepted, we detected parameters related to optometry tests. We proposed a novel computational approach for estimating eye fatigue by providing various verifiable models. We implemented three classifications and two regressions to investigate different feature sets, which led to two valid assessment models for eye fatigue employing blinking features and eye-movement features, with indicators from optometry tests as ground truth. Three graded results and one continuous result were provided by each model, respectively, making the whole result repeatable and comparable. We showed differences between VIMS and eye fatigue, and we also presented a new scheme to assess the eye fatigue of HMD users by analysing eye-tracker parameters.

Journal ArticleDOI
TL;DR: A novel asymmetric, high-frequency (aHF) waveform for HF-IRE is introduced and the results of a first, small, animal study are presented to test its efficacy and conclude that the use of the aHF enhances the feasibility of theHF-IRE method.
Abstract: Irreversible electroporation (IRE) using direct current (DC) is an effective method for the ablation of cardiac tissue. The use of DC-IRE, however, has two major drawbacks: the requirement of general anesthesia due to severe muscle contractions, and the formation of bubbles containing gaseous products of electrolysis. The use of high-frequency alternating current (HF-IRE) is expected to solve both problems, because HF-IRE produces little to no muscle spasm and does not cause electrolysis. In the present study, we introduce a novel asymmetric high-frequency (aHF) waveform for HF-IRE and present the results of a first, small animal study to test its efficacy. The data of the experiments suggest that the aHF waveform creates significantly deeper lesions than a symmetric HF waveform of the same energy and frequency (p = 0.003). We therefore conclude that the use of the aHF waveform enhances the feasibility of the HF-IRE method.

Journal ArticleDOI
TL;DR: This work aims at reviewing the literature on methods for CGM-based automatic attenuation or suspension of basal insulin with a focus on algorithms, their implementation in commercial devices and clinical evidence of their effectiveness and safety.
Abstract: For individuals affected by Type 1 diabetes (T1D), a chronic disease in which the pancreas does not produce any insulin, maintaining the blood glucose (BG) concentration as much as possible within the safety range (70–180 mg/dl) allows short- and long-term complications to be avoided. The tuning of exogenous insulin infusion can be difficult, especially because of the inter- and intra-day variability of physiological and behavioral factors. Continuous glucose monitoring (CGM) sensors, which monitor glucose concentration in the subcutaneous tissue almost continuously, have improved the detection of critical hypo- and hyper-glycemic episodes. Moreover, their integration with insulin pumps for continuous subcutaneous insulin infusion has allowed the development of algorithms that automatically tune insulin dosing based on CGM measurements in order to mitigate the incidence of critical episodes. In this work, we review the literature on methods for CGM-based automatic attenuation or suspension of basal insulin, with a focus on algorithms, their implementation in commercial devices, and clinical evidence of their effectiveness and safety.
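
To make the two algorithm families reviewed concrete, here is an illustrative sketch of a threshold-suspend rule and a predictive-suspend rule. The thresholds, 30-minute horizon and linear forecast are simplifying assumptions for exposition and do not reproduce any specific commercial algorithm:

```python
# Sketch: two basal-insulin suspension policies driven by CGM readings.

def threshold_suspend(cgm_mg_dl, low=70):
    """Suspend basal insulin once CGM crosses the low threshold."""
    return cgm_mg_dl <= low  # True -> suspend

def predictive_suspend(cgm_history, low=70, horizon_min=30, step_min=5):
    """Suspend if a short-horizon forecast crosses the low threshold.

    cgm_history: recent CGM samples (mg/dl), spaced step_min minutes apart.
    Uses a naive linear extrapolation of the last two samples.
    """
    trend = (cgm_history[-1] - cgm_history[-2]) / step_min  # mg/dl per min
    predicted = cgm_history[-1] + trend * horizon_min
    return predicted <= low
```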

Journal ArticleDOI
TL;DR: A novel algorithm to construct dedicated deep-learning neural networks (NNs) that are specialized in detecting newly emerging or aggravating existing cardiac pathology in serial ECGs is presented.
Abstract: Serial electrocardiography aims to contribute to electrocardiogram (ECG) diagnosis by comparing the ECG under consideration with a previously made ECG in the same individual. Here, we present a novel algorithm to construct dedicated deep-learning neural networks (NNs) that are specialized in detecting newly emerging or aggravating existing cardiac pathology in serial ECGs. We developed a novel deep-learning method for serial ECG analysis and tested its performance in the detection of heart failure in post-infarction patients, and in the detection of ischemia in patients who underwent elective percutaneous coronary intervention. The core of the method is the repeated structuring and learning procedure that, when fed with 13 serial ECG difference features (intra-individual differences in: QRS duration; QT interval; QRS maximum; T-wave maximum; QRS integral; T-wave integral; QRS complexity; T-wave complexity; ventricular gradient; QRS-T spatial angle; heart rate; J-point amplitude; and T-wave symmetry), dynamically creates a NN of at most three hidden layers. An optimization process reduces the possibility of obtaining an inefficient NN due to adverse initialization. Application of our method to the two clinical ECG databases yielded 3-layer NN architectures, both showing high testing performance (areas under the receiver operating characteristic curves of 84% and 83%, respectively). Our method was successful in two different clinical serial ECG applications. Further studies will investigate whether other problem-specific NNs can successfully be constructed, and even whether it will be possible to construct a universal NN to detect any pathologic ECG change.
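
A minimal sketch of the input layer of the method: the 13 intra-individual difference features computed between a baseline ECG and the ECG under consideration. Feature extraction itself is assumed to be done upstream, and the dictionary keys are illustrative names, not the authors' identifiers:

```python
# Sketch: build the 13-dimensional serial-ECG difference feature vector.
FEATURES = ["qrs_duration", "qt_interval", "qrs_max", "t_wave_max",
            "qrs_integral", "t_wave_integral", "qrs_complexity",
            "t_wave_complexity", "ventricular_gradient", "qrs_t_angle",
            "heart_rate", "j_point_amplitude", "t_wave_symmetry"]

def difference_features(baseline, follow_up):
    """baseline, follow_up: dicts mapping feature name -> measured value."""
    return [follow_up[f] - baseline[f] for f in FEATURES]
```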

Journal ArticleDOI
TL;DR: Three types of low-dimensional physical models of systemic arteries are reviewed; their application to the estimation of central aortic pressure is taken as an example to discuss their advantages and disadvantages, and advice is given on the proper choice of model for specific research and applications.

Abstract: The physiological processes and mechanisms of an arterial system are complex and subtle. Physics-based models have proven to be a very useful tool for simulating the actual physiological behavior of the arteries. Current physics-based models include high-dimensional models (2D and 3D models) and low-dimensional models (0D, 1D and tube-load models). High-dimensional models can describe the local hemodynamic information of arteries in detail, but for an exact model of the whole arterial system a high-dimensional model is computationally impracticable, since the complex geometry, viscosity or elastic properties and complex vectorial output need to be provided. For low-dimensional models, only the structure, centerline and viscosity or elastic properties need to be provided. Therefore, low-dimensional modeling, with its lower computational costs, might be a more applicable approach to represent the hemodynamic properties of the entire arterial system, and these three types of low-dimensional models have been extensively used in the study of cardiovascular dynamics. In recent decades, the application of physics-based models to estimate central aortic pressure has attracted increasing interest. However, to the best of our knowledge, there have been few review papers on the reconstruction of central aortic pressure using these physics-based models. In this paper, three types of low-dimensional physical models (0D, 1D and tube-load models) of systemic arteries are reviewed, their application to the estimation of central aortic pressure is taken as an example to discuss their advantages and disadvantages, and advice is given on the proper choice of model for specific research and applications.
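
As a concrete example of the 0D family (not a formula taken from this review), the classic two-element Windkessel relates aortic pressure $P(t)$ to inflow $Q(t)$ through just two lumped parameters, peripheral resistance $R$ and arterial compliance $C$:

$$C\,\frac{dP(t)}{dt}+\frac{P(t)}{R}=Q(t)$$

Only two parameters must be identified, which illustrates why such low-dimensional models are so much cheaper to compute than 2D/3D simulations of the same system.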

Journal ArticleDOI
TL;DR: An automated approach to detect human embryo development stages during incubation and to highlight embryos with abnormal behaviour by focusing on five different stages using the technique of deep learning is proposed.
Abstract: Infertility and subfertility affect a significant proportion of humanity. Assisted reproductive technology has proven capable of alleviating infertility issues. In vitro fertilisation is one such option, whose success is highly dependent on the selection of a high-quality embryo for transfer. This is typically done manually by analysing embryos under a microscope. However, evidence has shown that the success rate of manual selection remains low. The use of new incubators with integrated time-lapse imaging systems is providing new possibilities for embryo assessment. As such, we address this problem by proposing an approach based on deep learning for automated embryo quality evaluation through the analysis of time-lapse images. Automatic embryo detection is complicated by the topological changes of a tracked object, and the algorithm should process a large number of image files of different qualities in a reasonable amount of time. We propose an automated approach to detect human embryo development stages during incubation and to highlight embryos with abnormal behaviour by focusing on five different stages. The method encompasses two major steps. First, the location of the embryo in the image is detected by employing a Haar feature-based cascade classifier and leveraging the radiating lines. Then, a multi-class prediction model is developed to identify the total cell number in the embryo using deep learning. The experimental results demonstrate that the proposed method achieves an accuracy of at least 90% in the detection of embryo location. The implemented deep-learning approach to identify the early stages of embryo development resulted in an overall accuracy of over 92% using the selected convolutional neural network architectures. The most problematic stage was the 3-cell stage, presumably due to its short duration during development. This research contributes to the field by proposing a model to automate the monitoring of early-stage human embryo development. Unlike in other imaging fields, only a few published attempts have involved leveraging deep learning in this field. Therefore, the approach presented in this study could be used in the creation of novel algorithms integrated into the assisted reproductive technology used by embryologists.
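
A minimal OpenCV sketch of the first step, embryo localization with a Haar feature-based cascade classifier. The cascade file name is hypothetical: the authors trained their own detector, and OpenCV ships no embryo model:

```python
# Sketch: locate the embryo in a grayscale time-lapse frame with a trained
# Haar cascade, then crop the candidate region for the stage-prediction CNN.
import cv2

cascade = cv2.CascadeClassifier("embryo_cascade.xml")  # hypothetical model file

def locate_embryo(frame_gray):
    boxes = cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(100, 100))
    # Each box is (x, y, w, h); crop and pass to the multi-class stage model.
    return boxes
```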

Journal ArticleDOI
TL;DR: Favorable and unfavorable adaptive alterations of the tracheobronchial tree occur after left upper pulmonary lobectomy, and these alterations can be clarified through CT imaging and CFD analysis.

Abstract: Pulmonary lobectomy is a well-established curative treatment for localized lung cancer. After left upper pulmonary lobectomy, the upward displacement of the remaining lower lobe causes distortion or kinking of the bronchus, which is associated with intractable cough and breathlessness. However, a quantitative study of the structural and functional alterations of the tracheobronchial tree after lobectomy has not been reported. We sought to investigate these alterations using CT imaging analysis and the computational fluid dynamics (CFD) method. Both preoperative and postoperative CT images of 18 patients who underwent left upper pulmonary lobectomy were collected. After the tracheobronchial tree models were extracted, the angles between the trachea and bronchi, the surface area and volume of the tree, and the cross-sectional area of the left lower lobar bronchus were investigated. The CFD method was further used to describe the airflow characteristics in terms of wall pressure, airflow velocity, lobar flow rate, etc. It was found that the angle between the trachea and the right main bronchus increases after the operation, while the angle with the left main bronchus decreases. No significant alteration was observed in the surface area or volume of the tree between pre-operation and post-operation. After left upper pulmonary lobectomy, the cross-sectional area of the left lower lobar bronchus was reduced in most of the patients (15/18) by 15–75%, and in 4 patients by more than 50%. The wall pressure, airflow velocity and pressure drop increased significantly after the operation. The flow rate to the right lung increased significantly by 2–30% (with no significant difference between the individual lobes), and the flow rate to the left lung dropped accordingly. Many vortices were found in various places with severe distortions. Favorable and unfavorable adaptive alterations of the tracheobronchial tree occur after left upper pulmonary lobectomy, and these alterations can be clarified through CT imaging and CFD analysis. The severe distortion at the left lower lobar bronchus might exacerbate postoperative shortness of breath.

Journal ArticleDOI
TL;DR: New parameters characterising corneal deformation, including the Corvis Biomechanical Index and biomechanically compensated intraocular pressure, significantly extend the diagnostic capabilities of this device and may be helpful in assessing corneal diseases of the eye.

Abstract: Non-contact tonometers based on an air puff and a fast Scheimpflug camera are among the latest devices allowing the measurement of intraocular pressure and additional biomechanical parameters of the cornea. Biomechanical features significantly affect measured intraocular pressure values, and their changes may indicate the possibility of corneal ectasia. This work presents the latest and already known biomechanical parameters available in the newly offered software. The authors focus on their practical application and the diagnostic credibility indicated in the literature. An overview of the available literature indicates the importance of the new dynamic corneal parameters. The latest parameters, developed on the basis of analysing the biomechanics of the corneal deformation process and available in non-contact tonometers with a fast Scheimpflug camera, are used in the evaluation of laser refractive surgery procedures, e.g. the LASIK procedure. In addition, the assessment of changes in biomechanically corrected intraocular pressure confirms its independence from changes in corneal biomechanics, which may allow a realistic assessment of intraocular pressure. The newly developed Corvis Biomechanical Index, combined with corneal tomography and topography assessment, is an important aid in the classification of patients with keratoconus. New parameters characterising corneal deformation, including the Corvis Biomechanical Index and biomechanically compensated intraocular pressure, significantly extend the diagnostic capabilities of this device and may be helpful in assessing corneal diseases of the eye. Nevertheless, further research is needed to confirm their diagnostic pertinence.

Journal ArticleDOI
TL;DR: The deep-learning method proposed in this study is more sensitive than other recent systems, and its average false-positive rate is lower than that of others.

Abstract: A deep-learning artificial intelligence system is helpful for the early identification of ground glass opacities (GGOs). Images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database were used in AlexNet and GoogLeNet to detect pulmonary nodules, and 221 GGO images provided by Xinhua Hospital were used in ResNet50 for detecting GGOs. We used computed tomography image radial reorganization to create the input image of the three-dimensional features, and used the extracted features for deep learning, network training, testing, and analysis. In the final evaluation, we found that the accuracy of lung nodule identification could reach 88.0%, with an F-score of 0.891. In terms of performance and accuracy, our method was better than the existing solutions. The GGO nodule classification achieved a best F-score of 0.87805. We propose a preprocessing method of red, green, and blue (RGB) superposition in the region of interest to effectively increase the differentiation between nodules and normal tissues, which is the innovation of our research. The deep-learning method proposed in this study is more sensitive than other systems from recent years, and its average false-positive rate is lower than that of others.

Journal ArticleDOI
TL;DR: This proof-of-concept study shows that it is possible to automatically sleep score patients with epilepsy based on two-channel subcutaneous EEG, and the results are comparable with the methods currently used in clinical practice.
Abstract: The interplay between sleep structure and seizure probability has previously been studied using electroencephalography (EEG). Combining sleep assessment and detection of epileptic activity in ultralong-term EEG could potentially optimize seizure treatment and sleep quality of patients with epilepsy. However, the current gold standard, polysomnography (PSG), limits sleep recording to a few nights. A novel subcutaneous device was developed to record ultralong-term EEG, and has been shown to measure events of clinical relevance for patients with epilepsy. We investigated whether subcutaneous EEG recordings can also be used to automatically assess the sleep architecture of epilepsy patients. Four adult inpatients with probable or definite temporal lobe epilepsy were monitored simultaneously with long-term video scalp EEG (LTV EEG) and subcutaneous EEG. In total, 11 nights with concurrent recordings were obtained. The sleep EEG in the two modalities was scored independently by a trained expert according to the American Academy of Sleep Medicine (AASM) rules. By using the sleep stage labels from the LTV EEG as ground truth, an automatic sleep stage classifier based on 30 descriptive features computed from the subcutaneous EEG was trained and tested. An average Cohen's kappa of κ = 0.78 ± 0.02 was achieved using patient-specific leave-one-night-out cross-validation. When merging all sleep stages into a single class and thereby evaluating an awake–sleep classifier, we achieved a sensitivity of 94.8% and a specificity of 96.6%. Compared to manually labeled video-EEG, the model underestimated total sleep time and sleep efficiency by 8.6 and 1.8 min, respectively, and overestimated wakefulness after sleep onset by 13.6 min. This proof-of-concept study shows that it is possible to automatically sleep score patients with epilepsy based on two-channel subcutaneous EEG. The results are comparable with the methods currently used in clinical practice. In contrast to comparable studies with wearable EEG devices, several nights were recorded per patient, allowing for the training of patient-specific algorithms that can account for the individual brain dynamics of each patient. Clinical trial registered at ClinicalTrials.gov on 19 October 2016 (ID: NCT02946151).
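
A sketch of the validation scheme described above: patient-specific leave-one-night-out cross-validation scored with Cohen's kappa. The classifier choice and the feature matrix are placeholders for the paper's own 30-feature pipeline:

```python
# Sketch: leave-one-night-out CV for a per-patient sleep-stage classifier.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

def leave_one_night_out(X, y, nights):
    """X: (n_epochs, 30) feature matrix; y: AASM stage labels per 30-s epoch;
    nights: night identifier per epoch (the CV group)."""
    kappas = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=nights):
        clf = RandomForestClassifier(n_estimators=200).fit(X[train], y[train])
        kappas.append(cohen_kappa_score(y[test], clf.predict(X[test])))
    return np.mean(kappas), np.std(kappas)
```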

Journal ArticleDOI
TL;DR: The results suggested that three genes overlapping between two modules (FGFBP2, GFOD1 and MLC1) could potentially be used as gene biomarkers for the diagnosis of AMI.

Abstract: Acute myocardial infarction (AMI) is a common cause of mortality in developed countries. The feasibility of whole-genome gene expression analysis to identify outcome-related genes and dysregulated pathways remains unknown. Molecular markers such as BNP, CRP and other serum inflammatory markers have received attention at this point. However, these biomarkers exhibit elevated levels in patients with thyroid disease, renal failure and congestive heart failure. In this study, three groups of microarray data sets (GSE66360, GSE48060, GSE29532) were collected from GEO, with a total of 99, 52 and 55 samples, respectively. Weighted gene co-expression network analysis (WGCNA) was performed to obtain a classifier composed of related genes that best characterize AMI. Here, this study obtained three groups of microarray data sets (GSE66360, GSE48060, GSE29532) on AMI blood samples, with a total of 99, 52 and 24 samples, respectively. In all, 4672, 3185 and 3660 genes were identified in the GSE66360, GSE48060 and GSE60993 modules, respectively. We performed WGCNA, GO and KEGG pathway enrichment analyses on these three data sets, finding functional enrichment of the differentially expressed genes in inflammation and immune response. Transcriptome analyses were performed in AMI patients at four time points and compared with CAD patients with no history of MI, to determine gene expression profiles and their possible changes during recovery from myocardial infarction. The results suggested that three genes overlapping between two modules (FGFBP2, GFOD1 and MLC1) could potentially be used as gene biomarkers for the diagnosis of AMI.

Journal ArticleDOI
TL;DR: A hippocampal subfield segmentation method using generative adversarial networks that achieves pixel-level classification of brain MR images by building a UG-net model and an adversarial model and training the two models against each other alternately.

Abstract: Segmenting the hippocampal subfields accurately from brain magnetic resonance (MR) images is a challenging task in medical image analysis. Due to the small structural size and the morphological complexity of the hippocampal subfields, traditional segmentation methods can hardly obtain ideal segmentation results. In this paper, we propose a hippocampal subfield segmentation method using generative adversarial networks. The proposed method achieves pixel-level classification of brain MR images by building a UG-net model and an adversarial model and training the two models against each other alternately. UG-net extracts local information and retains the interrelationship features between pixels. Moreover, the adversarial training enforces spatial consistency among the generated class labels and smoothens the edges of class labels in the segmented region. The evaluation was performed on the dataset obtained from the Center for Imaging of Neurodegenerative Diseases (CIND) for the CA1, CA2, DG, CA3, Head, Tail, SUB, ERC and PHG hippocampal subfields, resulting in dice similarity coefficients (DSC) of 0.919, 0.648, 0.903, 0.673, 0.929, 0.913, 0.906, 0.884 and 0.889, respectively. For the large subfields, such as the Head and CA1 of the hippocampus, the DSC was increased by 3.9% and 9.03% compared with state-of-the-art approaches, while for the smaller subfields, such as ERC and PHG, the segmentation accuracy was significantly increased, by 20.93% and 16.30%, respectively. The results show the improvement in performance of the proposed method compared with other methods, including approaches based on multi-atlas, hierarchical multi-atlas, dictionary learning and sparse representation, and CNNs. In implementation, the proposed method provides better results in hippocampal subfield segmentation.
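
The reported scores are Dice similarity coefficients; for binary masks A (prediction) and B (ground truth), DSC = 2|A ∩ B| / (|A| + |B|), as in this small numpy helper:

```python
# Sketch: Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty -> 1
```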

Journal ArticleDOI
TL;DR: The accuracy with which the phone-based, state-of-the-art eye-tracking algorithm iTracker can distinguish between gaze towards the eyes and the mouth of a face displayed on the smartphone screen is assessed.
Abstract: Avoidance of looking others in the eye is a characteristic symptom of Autism Spectrum Disorders (ASD), and it has been hypothesised that quantitative monitoring of gaze patterns could be useful for objectively evaluating treatments. However, tools to measure gaze behaviour on a regular basis at a manageable cost are missing. In this paper, we investigated whether a smartphone-based tool could address this problem. Specifically, we assessed the accuracy with which the phone-based, state-of-the-art eye-tracking algorithm iTracker can distinguish between gaze towards the eyes and the mouth of a face displayed on the smartphone screen. This might allow mobile, longitudinal monitoring of gaze aversion behaviour in ASD patients in the future. We simulated a smartphone application in which subjects were shown an image on the screen and their gaze was analysed using iTracker. We evaluated the accuracy of our set-up across three tasks in a cohort of 17 healthy volunteers. In the first two tasks, subjects were shown different-sized images of a face and asked to alternate their gaze focus between the eyes and the mouth. In the last task, participants were asked to trace out a circle on the screen with their eyes. We confirm that iTracker can recapitulate the true gaze patterns and capture the relative position of gaze correctly, even on a phone system different from the one it was trained on. Subject-specific bias can be corrected using an error model informed by the calibration data. We compare two calibration methods and observe that a linear model performs better than a previously proposed support vector regression-based method. Under controlled conditions it is possible to reliably distinguish between gaze towards the eyes and the mouth with a smartphone-based set-up. However, future research will be required to improve the robustness of the system to the roll angle of the phone and the distance between the user and the screen to allow deployment in a home setting. We conclude that a smartphone-based gaze-monitoring tool provides promising opportunities for more quantitative monitoring of ASD.
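
A sketch of the calibration step that performed best: a per-subject linear model mapping raw iTracker gaze estimates to corrected screen coordinates, fitted on a few calibration points. The array names are illustrative:

```python
# Sketch: per-subject linear calibration of raw gaze estimates.
from sklearn.linear_model import LinearRegression

def fit_calibration(raw_xy, true_xy):
    """raw_xy, true_xy: (n_points, 2) arrays from a calibration session."""
    return LinearRegression().fit(raw_xy, true_xy)

def correct_gaze(model, raw_xy):
    """Apply the subject-specific bias correction to new gaze estimates."""
    return model.predict(raw_xy)
```

The paper reports that this simple linear correction outperformed a support vector regression-based alternative, which fits the intuition that the dominant error is a subject-specific affine bias.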

Journal ArticleDOI
TL;DR: Strontium-containing Mg-doped wollastonite (Sr-CSM) scaffolds not only have acceptable compression strength but also higher osteogenic bioactivity, and can be used as bone tissue engineering scaffolds.

Abstract: Bone scaffolds are one of the most effective methods for treating bone defects. An ideal bone tissue scaffold should not only provide space for bone tissue growth, but also have sufficient mechanical strength to support the bone defect area, and it should provide a customized size or shape for the patient's bone defect. In this study, strontium-containing Mg-doped wollastonite (Sr-CSM) bioceramic scaffolds with controllable pore size and pore structure were manufactured by direct-ink-writing 3D printing. The biological properties of Sr-CSM scaffolds were evaluated by apatite-formation ability, in vitro proliferation of rabbit bone-marrow stem cells (rBMSCs), and alkaline phosphatase (ALP) activity, using β-TCP and Mg-doped wollastonite (CSM) scaffolds as controls. The compression strength of the three scaffold types was probed after submerging specimens in Tris–HCl solution for 0, 2, 4 and 6 weeks and drying them completely. The mechanical test results showed that Sr-CSM scaffolds had an acceptable initial compression strength (56 MPa) and maintained good mechanical stability during degradation in vitro. Biological experiments showed that Sr-CSM scaffolds had a better apatite-formation ability. Cell experiments showed that Sr-CSM scaffolds had a higher cell proliferation ability compared with β-TCP and CSM scaffolds. The higher ALP activity of Sr-CSM scaffolds indicates that they can better stimulate osteoblastic differentiation and bone mineralization. Therefore, Sr-CSM scaffolds not only have acceptable compression strength but also higher osteogenic bioactivity, and can be used as bone tissue engineering scaffolds.

Journal ArticleDOI
TL;DR: Treadmill exercise results in smoother joint kinematics; in terms of muscle force, it requires lower loading on the knee extensors yet higher loading on the plantar flexors, especially the Gastrocnemius.

Abstract: Treadmill exercise is commonly used as an alternative to over-ground walking or running. Increasing evidence indicates that the kinetics of treadmill exercise differ from those of over-ground exercise. The biomechanics of treadmill and over-ground exercises have been investigated in terms of energy consumption, ground reaction force, and surface EMG signals, but these indexes cannot accurately characterize the musculoskeletal loading that directly contributes to tissue injuries. This study aimed to quantify the differences in lower-limb joint angles and muscle forces between treadmill and over-ground exercises. Ten healthy volunteers were required to walk at 100 and 120 steps/min and run at 140 and 160 steps/min on a treadmill and over ground. The joint flexion angles were obtained from motion capture experiments and were used to calculate the muscle forces with an inverse dynamics method. Hip, knee, and ankle joint motions under treadmill and over-ground conditions were similar in walking, yet different in running. Compared with over-ground running, joint motion ranges in treadmill running were smaller and less affected by stride frequency. The maximum Gastrocnemius force was greater in treadmill walking, yet the maximum Rectus femoris and Vastus forces were smaller. The maximum Gastrocnemius and Soleus forces were greater in treadmill running. Treadmill exercise results in smoother joint kinematics; in terms of muscle force, it requires lower loading on the knee extensors, yet higher loading on the plantar flexors, especially the Gastrocnemius. These findings and the methodology can provide a basis for rehabilitation therapy customization and sophisticated treadmill design.

Journal ArticleDOI
TL;DR: Continuous DF with high-resolution positioning control, along with the smaller size of the distractor placed in the oral cavity, will help improve the result of the reconstruction operation and lead to a successful DO procedure in a shorter time period.
Abstract: Distraction osteogenesis (DO) is a technique widely used in human body reconstruction. DO plays a significant role in maxillofacial reconstruction applications (MRA): through this method, bone defects and skeletal deformities in various cranio-maxillofacial areas can be reconstructed with superior results compared to conventional methods. Recent studies revealed that, in a DO solution, using an automatic continuous distractor can significantly improve the results while reducing the existing issues. This study aimed to design and develop a novel automatic continuous distraction osteogenesis (ACDO) device for use in MRA. The design comprises a lead-screw translation mechanism driven by a stepper motor, placed outside the mouth to generate the desired continuous linear force. This externally generated and controlled distraction force (DF) is transferred to the moving bone segment via a flexible miniature transmission system. The system is also equipped with an extra-oral ACDO controller to generate an accurate, reliable, and stable continuous DF. Simulation and experimental results confirmed the controller outputs and the desired accuracy of the device. Experiments conducted on a sheep jaw bone showed that the device can deliver a continuous DF of 38 N with a distraction accuracy of 7.6 nm on the bone segment, while reducing the distraction time span. Continuous DF with high-resolution positioning control, along with the smaller size of the distractor placed in the oral cavity, will help improve the result of the reconstruction operation and lead to a successful DO procedure in a shorter time period. The developed ACDO device has a positioning error below 1% while generating sufficient DF. These features make it a suitable distractor for enhanced DO treatment in MRA.
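The nanometre-scale positioning resolution of such a device follows directly from the lead-screw pitch and the stepper driver's microstepping. The sketch below works through that arithmetic and the pulse rate needed for a continuous distraction protocol; the pitch, microstepping factor, and 1 mm/day rate are illustrative assumptions, not the ACDO's actual specifications.

```python
# Sketch: converting a continuous distraction rate into stepper pulses
# for a lead-screw distractor, and the resulting linear resolution.
STEPS_PER_REV = 200      # full steps per motor revolution (assumed)
MICROSTEPS    = 256      # driver microstepping factor (assumed)
LEAD_MM       = 0.5      # screw advance per revolution in mm (assumed)

steps_per_mm  = STEPS_PER_REV * MICROSTEPS / LEAD_MM
resolution_nm = 1e6 / steps_per_mm       # linear travel per microstep (nm)

rate_mm_per_day   = 1.0                  # common distraction protocol
pulses_per_second = rate_mm_per_day * steps_per_mm / 86400.0

print(f"resolution: {resolution_nm:.1f} nm/microstep")
print(f"pulse rate for continuous 1 mm/day: {pulses_per_second:.3f} Hz")
```

With these example values the resolution comes out near 10 nm per microstep, the same order of magnitude as the 7.6 nm accuracy reported in the abstract, and the continuous motion corresponds to a pulse rate of roughly 1 Hz.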

Journal ArticleDOI
TL;DR: This model for spatiotemporal gait variables of patients with PD is the first to be developed through an accurate EFA and confirmed by CFA, and it shows an excellent fit.
Abstract: Gait impairment is a risk factor for falls in patients with Parkinson’s disease (PD). Gait can be conveniently assessed with electronic walkways, but there is a need to select which spatiotemporal gait variables are useful for assessing gait in PD. Existing models of gait variables, developed in healthy subjects and patients with PD, show methodological shortcomings in their validation through exploratory factor analysis (EFA) and were never confirmed by confirmatory factor analysis (CFA). The aims of this study were (1) to create a new model of gait for PD through EFA, and (2) to analyze the factorial structure of the new model and compare it with existing models through CFA. Of the 37 variables initially considered in 250 patients with PD, 10 did not show good-to-excellent reliability and were eliminated, and a further 19 were eliminated after inspection of the correlation matrix and the Kaiser–Meyer–Olkin measure of sampling adequacy. The remaining eight variables underwent EFA, and three factors emerged: pace/rhythm, variability, and asymmetry. The structural validity of the new model was then examined with CFA, using structural equation modeling. After some modifications suggested by the Modification Indices, the final model showed an excellent fit. In contrast, when the structures of previous gait models were analyzed, no model achieved convergence with our sample of patients. Our model of spatiotemporal gait variables for patients with PD is the first to be developed through an accurate EFA and confirmed by CFA. It contains eight gait variables divided into three factors and shows an excellent fit. The non-convergence of other models may reflect their inclusion of highly inter-correlated or low-reliability variables, or the fact that they did not use more recent methods for determining the number of factors to extract.
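The screening-then-EFA pipeline described above can be expressed compactly in code. The sketch below uses the third-party factor_analyzer package; the data are random placeholders standing in for the eight retained gait variables, and the variable names and oblique rotation are illustrative assumptions, not the study's actual analysis settings.

```python
# Sketch: KMO sampling-adequacy check followed by a three-factor EFA,
# mirroring the variable-screening and extraction steps in the abstract.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(0)
cols = ["speed", "cadence", "step_len", "swing_pct",
        "step_len_cv", "swing_cv", "step_len_asym", "swing_asym"]
df = pd.DataFrame(rng.normal(size=(250, 8)), columns=cols)  # 250 patients

kmo_per_item, kmo_total = calculate_kmo(df)  # adequacy of the correlation matrix
print(f"overall KMO: {kmo_total:.2f}")       # values > 0.6 usually required

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")  # oblique rotation
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=cols,
                   columns=["pace_rhythm", "variability", "asymmetry"]))
```

On real walkway data, each variable's highest loading would indicate which of the three factors it belongs to; the subsequent CFA step would then be specified and fit in a structural equation modeling package.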

Journal ArticleDOI
Meng Dai1, Shuying Li1, Yuanyuan Wang1, Qi Zhang2, Jinhua Yu1 
TL;DR: The proposed and validated post-processing method, combined with deep learning to improve the imaging quality of UCPWI, shows superior imaging performance and high reproducibility, and is thus promising for improving contrast image quality and the clinical value of UCPWI.
Abstract: Improving imaging quality is a fundamental problem in ultrasound contrast agent imaging (UCAI) research. Plane wave imaging (PWI) has been regarded as a potential method for UCAI because of its high frame rate and low mechanical index: a high frame rate improves the temporal resolution of UCAI, while a low mechanical index is essential because microbubbles are easily destroyed under high mechanical index conditions. However, the clinical practice of ultrasound contrast agent plane wave imaging (UCPWI) is still limited by poor imaging quality owing to the lack of transmit focusing. The purpose of this study was to propose and validate a new post-processing method, combined with deep learning, to improve the imaging quality of UCPWI. The proposed method consists of three stages: (1) a deep learning approach based on U-net was trained to differentiate microbubble from tissue radio frequency (RF) signals; (2) to eliminate the remaining tissue RF signals, the bubble approximated wavelet transform (BAWT) combined with a maximum eigenvalue threshold was employed; BAWT enhances the brightness of the UCA area, and the eigenvalue threshold can be set to eliminate interference areas, exploiting the large difference in maximum eigenvalue between UCA and tissue areas; (3) finally, accurate microbubble images were obtained through eigenspace-based minimum variance (ESBMV) beamforming. The method was validated by both phantom and in vivo rabbit experiments. Compared with UCPWI based on delay and sum (DAS), the imaging contrast-to-tissue ratio (CTR) and contrast-to-noise ratio (CNR) were improved by 21.3 dB and 10.4 dB in the phantom experiment, with corresponding improvements of 22.3 dB and 42.8 dB in the rabbit experiment. Our method shows superior imaging performance and high reproducibility, and is thus promising for improving contrast image quality and the clinical value of UCPWI.
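For context on the baseline the three-stage pipeline is compared against, the sketch below implements delay-and-sum (DAS) beamforming for a single zero-angle plane-wave transmit. The array geometry, sampling parameters, and the random channel data are illustrative assumptions, not the experimental setup of the study.

```python
# Sketch: DAS beamforming for one 0-degree plane-wave transmit. For each
# pixel, the transmit delay is z/c and the receive delay is the
# element-to-pixel distance over c; channel samples at those delays are
# summed to form the pixel value.
import numpy as np

c, fs = 1540.0, 40e6                    # sound speed (m/s), sampling rate (Hz)
n_elem, pitch = 128, 0.3e-3             # element count and pitch (m)
x_elem = (np.arange(n_elem) - n_elem / 2) * pitch

rf = np.random.randn(n_elem, 2048)      # placeholder channel RF data

def das_pixel(x, z):
    tau = z / c + np.sqrt((x - x_elem) ** 2 + z ** 2) / c
    idx = np.round(tau * fs).astype(int)
    valid = idx < rf.shape[1]           # drop delays past the record length
    return rf[np.flatnonzero(valid), idx[valid]].sum()

zs = np.linspace(5e-3, 30e-3, 200)      # beamform a coarse pixel grid
img = np.array([[das_pixel(x, z) for x in x_elem[::4]] for z in zs])
print(img.shape)
```

ESBMV replaces the uniform summation here with adaptive weights derived from the eigendecomposition of the channel covariance matrix, which is what yields the CTR and CNR gains reported over DAS.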