Author

Ping He

Bio: Ping He is an academic researcher from Tongji University. The author has contributed to research in the topics Computer Science and Medicine, has an h-index of 2, and has co-authored 2 publications receiving 6 citations.

Papers
Journal ArticleDOI
TL;DR: The hypothesis that the degree of fetal lung maturation can be represented by the texture information in ultrasound images is preliminarily validated; the maturation degree can be regarded as represented by the deep model's output, the estimated gestational age.
Abstract: The evaluation of fetal lung maturity is critical in clinical practice, since lung immaturity is an important cause of neonatal morbidity and mortality. To evaluate the degree of fetal lung maturation, our study established a deep model based on ultrasound images of the four-cardiac-chamber view plane. A two-stage transfer learning approach with a specifically designed U-net structure is proposed for this purpose. In the first stage, the model learns to recognize the fetal lung region in the ultrasound images. Our study hypothesizes that the degree of fetal lung maturation is generally proportional to the gestational age; in the second stage, the pretrained deep model is therefore trained to estimate the gestational age from the fetal lung region of the ultrasound images. In total, 332 patients were included in the study: the first 206 patients were used for training and the subsequent 126 patients for independent testing. On the test set, the established deep model estimated gestational age with an error of 1.56 ± 2.17 weeks, and its correlation coefficient with the ground-truth gestational age reached 0.7624 (95% CI 0.6779 to 0.8270, P value < 0.00001). The hypothesis that the degree of fetal lung maturation can be represented by the texture information in ultrasound images has thus been preliminarily validated, and the maturation degree can be regarded as represented by the deep model's output, the estimated gestational age.
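The two-stage scheme can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, assuming a tiny U-Net-style encoder/decoder and placeholder data; it is not the authors' exact architecture or training setup, only the pattern of pretraining on lung-region segmentation and then transferring the encoder to gestational-age regression.

```python
# Sketch of the two-stage transfer-learning scheme described above.
# Assumptions: PyTorch; a tiny U-Net-style encoder/decoder stands in for the
# paper's "specific U-net structure"; data and labels are placeholders.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                       # shared encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                       # stage-1 segmentation head
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# ---- Stage 1: learn to recognize the fetal lung region (segmentation) ----
model = TinyUNet()
seg_loss = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

image = torch.randn(4, 1, 128, 128)                     # placeholder ultrasound batch
lung_mask = torch.randint(0, 2, (4, 1, 128, 128)).float()
opt.zero_grad()
seg_loss(model(image), lung_mask).backward()
opt.step()

# ---- Stage 2: reuse the pretrained encoder for gestational-age regression ----
class GARegressor(nn.Module):
    def __init__(self, pretrained_encoder):
        super().__init__()
        self.enc = pretrained_encoder                   # transferred weights
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 1))     # predicts GA in weeks

    def forward(self, x):
        return self.head(self.enc(x))

reg_model = GARegressor(model.enc)
reg_loss = nn.MSELoss()
opt2 = torch.optim.Adam(reg_model.parameters(), lr=1e-4)
ga_weeks = torch.rand(4, 1) * 12 + 28                   # placeholder GA labels
opt2.zero_grad()
reg_loss(reg_model(image), ga_weeks).backward()
opt2.step()
```

Sharing the encoder between the two stages is what lets the segmentation pretraining act as a texture-aware initialization for the regression task.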

10 citations

Journal ArticleDOI
TL;DR: In this paper, a deep neural network is used to synthesize virtual elastography ultrasound (V-EUS) from conventional B-mode images, which can be used for differentiating benign and malignant breast tumors.
Abstract: Elastography ultrasound (EUS) imaging is a vital ultrasound imaging modality. The current use of EUS faces many challenges, such as vulnerability to subjective manipulation, echo signal attenuation, and unknown risks of elastic pressure in certain delicate tissues. The hardware requirement of EUS also hinders the trend of miniaturization of ultrasound equipment. Here we show a cost-efficient solution by designing a deep neural network to synthesize virtual EUS (V-EUS) from conventional B-mode images. A total of 4580 breast tumor cases were collected from 15 medical centers, including a main cohort with 2501 cases for model establishment, an external dataset with 1730 cases and a portable dataset with 349 cases for testing. In the task of differentiating benign and malignant breast tumors, there is no significant difference between V-EUS and real EUS on high-end ultrasound, while the diagnostic performance of pocket-sized ultrasound is improved by about 5% after being equipped with V-EUS.
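As an illustration of the image-to-image synthesis idea only (the paper's actual V-EUS network, losses, and training data are not reproduced here), a minimal sketch assuming a small encoder-decoder generator trained with an L1 reconstruction loss against paired real EUS maps:

```python
# Illustrative sketch: a small encoder-decoder "generator" mapping B-mode
# images to synthetic elastography-like maps, trained with an L1 loss.
# Architecture, loss, and data are placeholder assumptions, not the paper's.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),           # downsample
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # upsample
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
)

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
b_mode = torch.randn(8, 1, 256, 256)          # placeholder B-mode batch
real_eus = torch.randn(8, 1, 256, 256)        # placeholder paired EUS maps

opt.zero_grad()
loss = nn.functional.l1_loss(generator(b_mode), real_eus)
loss.backward()
opt.step()
```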

4 citations

Journal ArticleDOI
TL;DR: A hierarchical model is proposed to automatically detect the standard sagittal-view plane from 3D ultrasound data, using Hessian-matrix-based filtering to capture the plate-structure distribution and a sampling-based Hough transform for plane detection.
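A rough sketch of the two stages named in the TL;DR, under simplifying assumptions: Hessian eigenvalue analysis scores plate-like voxels, and a sampling-based plane search (in the spirit of a sampling-based Hough transform or RANSAC) fits a plane to the high-scoring voxels. Scales, thresholds, and the hierarchical structure of the original model are illustrative placeholders.

```python
# Hypothetical NumPy/SciPy sketch: (1) Hessian eigenvalue analysis to score
# plate-like (sheet-like) voxels, (2) a sampling-based plane search over the
# high-scoring voxels. Parameters are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def plate_score(volume, sigma=2.0):
    """Score each voxel for plate-likeness via Hessian eigenvalues."""
    smoothed = gaussian_filter(volume, sigma)
    grads = np.gradient(smoothed)
    hessian = np.empty(volume.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])
        for j in range(3):
            hessian[..., i, j] = second[j]
    eigvals = np.linalg.eigvalsh(hessian)       # sorted ascending per voxel
    # plate-like: one strongly negative eigenvalue, two near zero
    return np.maximum(-eigvals[..., 0], 0) - np.abs(eigvals[..., 2])

def sample_plane(points, n_iters=200, tol=1.5, seed=0):
    """Randomly sample point triplets; keep the plane with the most inliers."""
    rng = np.random.default_rng(seed)
    best_plane, best_inliers = None, -1
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-6:
            continue
        normal = normal / np.linalg.norm(normal)
        dists = np.abs((points - p0) @ normal)
        inliers = int((dists < tol).sum())
        if inliers > best_inliers:
            best_plane, best_inliers = (normal, p0), inliers
    return best_plane

volume = np.random.rand(32, 32, 32)              # placeholder 3D ultrasound volume
scores = plate_score(volume)
candidates = np.argwhere(scores > np.percentile(scores, 99)).astype(float)
plane = sample_plane(candidates)                 # (normal vector, point on plane)
```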

2 citations

Journal ArticleDOI
TL;DR: Zhang et al. interpreted the cascade-connected neural network (CCNN) as a discrete approximation of ordinary differential equations and proposed a cascade-refine model, which takes advantage of CCNNs and overcomes their limitations on backbone number and depth by sharing parameters among the stacked network backbones.
Abstract: Landmark detection has been well developed by deep learning methods, and the cascade-connected neural network (CCNN) stands out as a widely used deep learning landmark detection method. CCNNs consist of several stacked network backbones, where the predictions of the previous network backbone are used as the input of the following one. Due to GPU memory bottlenecks, CCNNs have two limitations. First, the network backbones of CCNNs have limited numbers and depths; thus, the learning ability of CCNNs is limited. Second, CCNNs are usually trained in low-resolution images. However, the neighboring pixels in high-resolution images are usually vital for landmark detection, especially for cephalometric landmark detection. This paper interprets CCNNs as the discrete approximation of ordinary differential equations. Relying on this explanation, we further propose a novel model, called the cascade-refine model, which takes advantage of CCNNs and makes it possible to overcome the limitations of number and depth by sharing parameters among stacked network backbones. Moreover, the proposed model obeys the rule of coarse-to-fine architectures, where a global module is used to generate the coarse landmark locations, and a local module is adopted to tune the pixel error of latent landmark locations in the region of interest. The proposed model is trained in an end-to-end manner. Thus, the neighboring detailed high-resolution pixels are directly exploited for cephalometric landmark detection. The proposed cascade-refine model outperforms state-of-the-art methods on the public Automatic Cephalometric X-ray Landmark Detection Challenge 2015 dataset. We also built two private cephalometric X-ray datasets, and experimental results on both datasets demonstrate the good performance of the proposed model.
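The parameter-sharing idea can be sketched briefly. The following hypothetical PyTorch snippet applies one backbone repeatedly so that each pass adds a residual update to the landmark heatmaps, which is the discrete, ODE-like iteration described above; the backbone, landmark count, and the coarse-to-fine global/local modules of the actual cascade-refine model are not reproduced.

```python
# Minimal sketch (not the authors' exact architecture): a single refinement
# backbone with shared weights is applied repeatedly, so the cascade behaves
# like an Euler-discretized ODE, x_{t+1} = x_t + f(x_t), without multiplying
# the parameter count by the number of stages.
import torch
import torch.nn as nn

class SharedRefiner(nn.Module):
    def __init__(self, n_landmarks=19, steps=4):
        super().__init__()
        self.steps = steps
        # one backbone reused at every cascade stage (parameter sharing)
        self.backbone = nn.Sequential(
            nn.Conv2d(1 + n_landmarks, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_landmarks, 3, padding=1),
        )

    def forward(self, image, heatmaps):
        for _ in range(self.steps):
            residual = self.backbone(torch.cat([image, heatmaps], dim=1))
            heatmaps = heatmaps + residual      # discrete ODE-style update
        return heatmaps

model = SharedRefiner()
image = torch.randn(2, 1, 64, 64)               # placeholder cephalogram crop
init_heatmaps = torch.zeros(2, 19, 64, 64)      # coarse landmark heatmaps
refined = model(image, init_heatmaps)           # refined landmark heatmaps
```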

Journal ArticleDOI
TL;DR: This collaborative initiative of about 20 centers proficient in gynecological ultrasound imaging is creating a standardized ovarian ultrasound image database that will cover the majority of the spectrum of ovarian lesions in China.
Abstract: Background: At present there is no large, multi-center and standardized database of ovarian ultrasound images for teaching and research in China. Methods: A standardized ovarian ultrasound image database is being created in a collaborative initiative of about 20 centers proficient in gynecological ultrasound imaging. The database will include both adults and children in China. Results: Each center will provide cases that meet the submission requirements, including standard normal cases (SNC), standard abnormal cases (SAC) and historical classic cases (HCC). This database will cover the majority of the spectrum of ovarian lesions in China. Conclusions: This comprehensive database of ovarian lesions will be a valuable resource for diagnosis and education.

Cited by
Journal ArticleDOI
TL;DR: In this article, a comprehensive review of transfer learning in medical image analysis is presented, including the structure of CNN, background knowledge, different types of strategies performing transfer learning, different sub-fields of analysis, and discussion on the future prospect for transfer learning.
Abstract: Compared with common deep learning methods (e.g., convolutional neural networks), transfer learning is characterized by simplicity, efficiency and low training cost, breaking the curse of small datasets. Medical image analysis plays an indispensable role in both scientific research and clinical diagnosis. Common medical image acquisition methods include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), and X-Ray. Although these imaging methods allow non-invasive qualitative and quantitative analysis of patients, medical images, and especially their labels, remain scarce and insufficient compared with image datasets in other computer vision fields such as faces. Therefore, more and more researchers have adopted transfer learning for medical image processing. In this study, after reviewing one hundred representative papers from IEEE, Elsevier, Google Scholar, Web of Science and other sources published from 2000 to 2020, a comprehensive review is presented, including (i) the structure of CNNs, (ii) background knowledge of transfer learning, (iii) different types of transfer learning strategies, (iv) applications of transfer learning in various sub-fields of medical image analysis, and (v) a discussion of the future prospects of transfer learning in medical image analysis. Through this review, beginners can gain an overall and systematic understanding of how transfer learning is applied in medical image analysis, and policymakers in related fields will benefit from the summary of trends in transfer learning for medical imaging and may be encouraged to support its future development.
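A minimal sketch of the standard fine-tuning recipe covered by such reviews, assuming an ImageNet-pretrained ResNet-18 from torchvision (weight API of torchvision 0.13 or later), a two-class medical task, and placeholder data; datasets and hyperparameters are illustrative only.

```python
# Sketch of a common transfer-learning recipe: take an ImageNet-pretrained CNN,
# freeze the early layers, and retrain a new classification head on a small
# medical-image dataset. Class count and data are placeholders.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head, e.g. benign vs malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)            # placeholder medical-image batch
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
criterion(model(images), labels).backward()
optimizer.step()
```

Freezing the pretrained feature extractor and training only the new head is the lowest-cost strategy; unfreezing deeper layers with a smaller learning rate is a common next step when more labeled data are available.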

116 citations

Journal ArticleDOI
TL;DR: A model is proposed that turns the sagittal plane detection problem into a symmetry plane and axis searching problem; a deep belief network (DBN) and a modified circle detection method provide prior knowledge for the search.
Abstract: Fetal nuchal translucency (NT) thickness is one of the most important parameters in prenatal screening. Locating the mid-sagittal plane is one of the key points to measure NT. In this paper, an automatic method for the sagittal plane detection using 3-D ultrasound data is proposed. To avoid unnecessary massive searching and the corresponding huge computation load, a model is proposed to turn the sagittal plane detection problem into a symmetry plane and axis searching problem. The deep belief network (DBN) and a modified circle detection method provide prior knowledge for the searching. The experiments show that in most cases, the result plane has small distance error and angle error at the same time—88.6% of the result planes have a distance error less than 4 mm and 71.0% have angle error less than 20°.
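To make the symmetry-search formulation concrete, here is a minimal NumPy sketch that scores axis-aligned candidate planes by how well the volume mirrors across them; the DBN prior, the modified circle detection, and the full plane/axis parameterization of the paper are not reproduced, and the data are placeholders.

```python
# Hypothetical sketch: score candidate mirror planes orthogonal to one axis by
# the mean absolute difference between the two mirrored halves (lower = more
# symmetric). A full search would also rotate the candidate plane.
import numpy as np

def symmetry_score(volume, plane_index, axis=0):
    """Mean absolute mirror difference across a candidate plane."""
    left = np.take(volume, range(plane_index), axis=axis)
    right = np.take(volume, range(plane_index, volume.shape[axis]), axis=axis)
    width = min(left.shape[axis], right.shape[axis])
    left = np.take(left, range(left.shape[axis] - width, left.shape[axis]), axis=axis)
    right = np.flip(np.take(right, range(width), axis=axis), axis=axis)
    return float(np.abs(left - right).mean())

volume = np.random.rand(64, 64, 64)               # placeholder 3D ultrasound volume
scores = [symmetry_score(volume, i) for i in range(8, 56)]
best_plane = 8 + int(np.argmin(scores))           # index of the most symmetric slice
```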

21 citations

Journal ArticleDOI
TL;DR: A detailed survey of the most recent work in the field, covering a total of 145 research papers published after 2017; each paper is analyzed and commented on from both the methodology and the application perspective.

15 citations

Journal ArticleDOI
17 Jan 2022 - iScience
TL;DR: Kim et al. conducted a comprehensive search of eight bibliographic databases and found that 2D ultrasound images were used more often than 3D and 4D images; the surveyed methods span segmentation, classification integrated with segmentation, and other miscellaneous approaches such as object detection, regression, and reinforcement learning.

8 citations

Journal ArticleDOI
TL;DR: In this paper, the performance of quantitative high-throughput sonographic feature analysis was compared with that of qualitative feature assessment; the high-throughput analysis proved better than two-dimensional qualitative feature assessment in predicting tumor biological properties.
Abstract: Sonographic features are associated with pathological and immunohistochemical characteristics of triple-negative breast cancer (TNBC). To predict the biological properties of TNBC, the performance of quantitative high-throughput sonographic feature analysis was compared with that of qualitative feature assessment. We retrospectively reviewed ultrasound images and clinical, pathological, and immunohistochemical (IHC) data of 252 female TNBC patients. All patients were subgrouped according to the histological grade, Ki67 expression level, and human epidermal growth factor receptor 2 (HER2) score. Qualitative sonographic feature assessment included shape, margin, posterior acoustic pattern, and calcification, referring to the Breast Imaging Reporting and Data System (BI-RADS). Quantitative sonographic features were acquired based on computer-aided radiomics analysis. Breast cancer masses were manually segmented from the surrounding breast tissues. For each ultrasound image, 1688 radiomics features of 7 feature classes were extracted. Principal component analysis (PCA), the least absolute shrinkage and selection operator (LASSO), and a support vector machine (SVM) were used to determine the high-throughput radiomics features that were highly correlated with the biological properties. The performance of both quantitative and qualitative sonographic features in predicting the biological properties of TNBC was measured by the area under the receiver operating characteristic curve (AUC). In the qualitative assessment, regular tumor shape, no angular or spiculated margin, posterior acoustic enhancement, and no calcification were used as the independent sonographic features for TNBC. Using the combination of these four features to predict the histological grade, Ki67, HER2, axillary lymph node metastasis (ALNM), and lymphovascular invasion (LVI), the AUC was 0.673, 0.680, 0.651, 0.587, and 0.566, respectively. The number of high-throughput features that closely correlated with biological properties was 34 for histological grade (AUC 0.942), 27 for Ki67 (AUC 0.732), 25 for HER2 (AUC 0.730), 34 for ALNM (AUC 0.804), and 34 for LVI (AUC 0.795). High-throughput quantitative sonographic features are superior to traditional qualitative ultrasound features in predicting the biological behavior of TNBC.
• Sonographic appearances of TNBCs show great variety in accordance with their biological and clinical characteristics.
• Both qualitative and quantitative sonographic features of TNBCs are associated with tumor biological characteristics.
• Quantitative high-throughput feature analysis is superior to two-dimensional sonographic feature assessment in predicting tumor biological properties.
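A minimal scikit-learn sketch of a feature-selection-plus-classification pipeline of the kind described above (LASSO-based selection feeding an SVM, evaluated by AUC); the PCA step is omitted for brevity, and the feature matrix, labels, split, and hyperparameters are synthetic placeholders rather than the study's data.

```python
# Illustrative radiomics-style pipeline: scale features, select a sparse subset
# with LASSO, classify with an SVM, and report test AUC. Data are synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 1688))          # 1688 radiomics features per image
y = rng.integers(0, 2, size=252)          # e.g. high vs low histological grade

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.01))),   # LASSO feature selection
    ("svm", SVC(kernel="rbf", probability=True)),     # SVM classifier
])
pipeline.fit(X_train, y_train)
auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```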

4 citations