Author

Yong Pi

Bio: Yong Pi is an academic researcher from Sichuan University. The author has contributed to research in topics: Computer science & Artificial intelligence. The author has an h-index of 4 and has co-authored 8 publications receiving 95 citations.

Papers
Journal ArticleDOI
Xiaofeng Qi, Lei Zhang, Yao Chen, Yong Pi, Chen Yi, Qing Lv, Zhang Yi
TL;DR: An automated breast cancer diagnosis model for ultrasonography images is developed using deep convolutional neural networks with multi-scale kernels and skip connections; it achieves performance comparable to human sonographers and can be applied in clinical scenarios.
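
The paper's architecture is not reproduced on this page; purely as an illustration of the two ingredients named in the summary (multi-scale kernels and skip connections), a minimal PyTorch sketch of such a block could look like the following. All layer names, channel counts, and kernel sizes are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class MultiScaleSkipBlock(nn.Module):
    """Illustrative block: parallel convolutions with different kernel sizes,
    concatenated, fused, and added back to the input (skip connection).
    Channel counts and kernel sizes are assumptions, not the paper's."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Three parallel branches with 1x1, 3x3, and 5x5 kernels.
        self.branch1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        # 1x1 convolution to fuse the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1
        )
        # Skip connection: add the block input to the fused features.
        return self.act(self.fuse(multi_scale) + x)

if __name__ == "__main__":
    block = MultiScaleSkipBlock(channels=64)
    out = block(torch.randn(1, 64, 128, 128))  # e.g. an ultrasound feature map
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```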

148 citations

Journal ArticleDOI
Yong Pi, Zhen Zhao, Yongzhao Xiang, Yuhao Li, Huawei Cai, Zhang Yi
TL;DR: The development of a deep convolutional neural network to determine the presence or absence of bone metastasis is described; the network consists of three sub-networks that extract, aggregate, and classify high-level features in a data-driven manner.
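
The paper's sub-networks are not detailed here; as a hedged illustration of the extract/aggregate/classify decomposition mentioned in the summary, a minimal PyTorch sketch (all layer choices and sizes are assumptions) might look like this:

```python
import torch
import torch.nn as nn

class ExtractAggregateClassify(nn.Module):
    """Illustrative three-stage model: a feature extractor, a feature
    aggregator, and a classifier head. Layer choices are assumptions."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # 1) Extract: convolutional feature extractor.
        self.extract = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 2) Aggregate: pool high-level features into a fixed-size vector.
        self.aggregate = nn.AdaptiveAvgPool2d(1)
        # 3) Classify: map the aggregated features to class scores.
        self.classify = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.extract(x)                    # (B, 64, H/4, W/4)
        pooled = self.aggregate(features).flatten(1)  # (B, 64)
        return self.classify(pooled)                  # (B, num_classes)

if __name__ == "__main__":
    model = ExtractAggregateClassify()
    logits = model(torch.randn(2, 1, 256, 256))  # e.g. scintigraphy crops
    print(logits.shape)  # torch.Size([2, 2])
```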

40 citations

Journal ArticleDOI
Yong Pi, Yao Chen, Dan Deng, Xiaofeng Qi, Jilan Li, Qing Lv, Zhang Yi
TL;DR: This paper formulates the diagnosis of breast cancer on ultrasonography images as a Multiple Instance Learning (MIL) problem, diagnosing a breast nodule by jointly analyzing it on multiple planes, and develops an attention-augmented deep neural network to solve this problem.
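
The paper's attention mechanism is not specified on this page; as a generic illustration of attention-based MIL pooling (in the style of Ilse et al., 2018), where each imaging plane of a nodule is treated as one instance, a minimal PyTorch sketch under assumed dimensions could be:

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Illustrative attention-based MIL pooling: each plane of a nodule is
    an instance; attention weights decide how much each plane contributes
    to the bag-level (nodule-level) prediction. Dimensions are assumptions."""
    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)  # benign vs. malignant logit

    def forward(self, instance_feats: torch.Tensor) -> torch.Tensor:
        # instance_feats: (num_planes, feat_dim) for one nodule (one bag).
        scores = self.attention(instance_feats)           # (num_planes, 1)
        weights = torch.softmax(scores, dim=0)            # attention over planes
        bag_feat = (weights * instance_feats).sum(dim=0)  # (feat_dim,)
        return self.classifier(bag_feat)                  # bag-level logit

if __name__ == "__main__":
    head = AttentionMILHead()
    planes = torch.randn(3, 512)  # features of 3 ultrasound planes of a nodule
    print(head(planes).shape)     # torch.Size([1])
```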

13 citations

Journal ArticleDOI
TL;DR: Wang et al. developed an automated breast cancer diagnosis system based on stacked denoising autoencoders and generative adversarial networks; deployed on mobile phones, it takes a photo of the ultrasound report as input and performs diagnosis on each image.
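
Only the building blocks are named in the summary; the sketch below is a generic denoising autoencoder in PyTorch, not the authors' implementation, and covers just that one ingredient (the stacking and the GAN component are omitted). All sizes and the noise level are assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Illustrative denoising autoencoder: the input image is corrupted
    with noise and the network is trained to reconstruct the clean image.
    Sizes and noise level are assumptions."""
    def __init__(self, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))          # reconstruct it

if __name__ == "__main__":
    dae = DenoisingAutoencoder()
    clean = torch.rand(1, 1, 64, 64)  # e.g. a cropped ultrasound image
    loss = nn.functional.mse_loss(dae(clean), clean)  # reconstruction loss
    print(loss.item())
```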

8 citations

Journal ArticleDOI
TL;DR: Wang et al. proposed an artificial intelligence (AI) guided identification of suspicious bone metastatic lesions from whole-body bone scintigraphy (WBS) images using convolutional neural networks (CNNs).
Abstract:
Background: We aimed to construct an artificial intelligence (AI) guided identification of suspicious bone metastatic lesions from whole-body bone scintigraphy (WBS) images using convolutional neural networks (CNNs).
Methods: We retrospectively collected the 99mTc-MDP WBS images with confirmed bone lesions from 3352 patients with malignancy. 14,972 bone lesions were delineated manually by physicians and annotated as benign or malignant. The lesion-based differentiating performance of the proposed network was evaluated by fivefold cross-validation and compared with three other popular CNN architectures for medical imaging. The average sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated. To further examine the outcomes of this study, we conducted subgroup analyses of the CNN's classifying ability by lesion burden number and tumor type.
Results: In the fivefold cross-validation, our proposed network reached the best average accuracy (81.23%) in identifying suspicious bone lesions, compared with InceptionV3 (80.61%), VGG16 (81.13%), and DenseNet169 (76.71%). Additionally, the CNN model's lesion-based average sensitivity and specificity were 81.30% and 81.14%, respectively. Based on the lesion burden number of each image, the AUC was 0.847 in the few group (lesion number n ≤ 3), 0.838 in the medium group (n = 4-6), and 0.862 in the extensive group (n > 6). For the three major primary tumor types, the CNN-based lesion-identifying AUC was 0.870 for lung cancer, 0.900 for prostate cancer, and 0.899 for breast cancer.
Conclusion: The CNN model shows potential in identifying suspicious benign and malignant bone lesions in whole-body bone scintigraphy images.
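
The network itself is not available here, but the reported evaluation protocol (fivefold cross-validation with average sensitivity, specificity, accuracy, and AUC) can be illustrated with scikit-learn. The sketch below uses random stand-in data and a logistic-regression stand-in for the CNN; only the metric computation mirrors the protocol described above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score
from sklearn.linear_model import LogisticRegression

# Stand-in data: in the study these would be per-lesion inputs and
# benign(0)/malignant(1) labels; random values here just show the protocol.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)

sens, spec, acc, auc = [], [], [], []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    # Stand-in classifier; the study used a CNN instead.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
    sens.append(tp / (tp + fn))   # sensitivity (recall on malignant lesions)
    spec.append(tn / (tn + fp))   # specificity (recall on benign lesions)
    acc.append(accuracy_score(y[test_idx], pred))
    auc.append(roc_auc_score(y[test_idx], prob))

print(f"sensitivity={np.mean(sens):.3f} specificity={np.mean(spec):.3f} "
      f"accuracy={np.mean(acc):.3f} AUC={np.mean(auc):.3f}")
```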

7 citations


Cited by
Journal ArticleDOI
TL;DR: This study presents a review of new applications of machine learning and deep learning for detecting and classifying breast cancer, and provides an overview of progress, future trends, and challenges in this area.
Abstract: Breast cancer is the second leading cause of death for women, so accurate early detection can help decrease breast cancer mortality rates. Computer-aided detection allows radiologists to detect abnormalities efficiently. Medical images are sources of information relevant to the detection and diagnosis of various diseases and abnormalities. Several imaging modalities allow radiologists to study internal structure, and these modalities have attracted great interest across many types of research; in some medical fields, each of them is of considerable significance. This study presents a review of the new applications of machine learning and deep learning technology for detecting and classifying breast cancer and provides an overview of progress in this area. The review covers the classification of breast cancer using multi-modal medical imaging. Details are also given on techniques developed to facilitate the classification of tumors, non-tumors, and dense masses in various imaging modalities. It first provides an overview of the different approaches to machine learning, then an overview of the different deep learning techniques and specific architectures for the detection and classification of breast cancer. We also provide a brief overview of the different image modalities to give a complete picture of the area. The review drew on a broad variety of research databases as sources of publications in the field. Finally, it summarizes the future trends and challenges in the classification and detection of breast cancer.

164 citations

Journal ArticleDOI
TL;DR: A novel method is proposed to segment breast tumors via semantic classification and patch merging; it achieved competitive results compared to conventional methods in terms of true positives (TP) and false positives (FP), and produced good approximations to the hand-labelled tumor contours.

135 citations

Journal ArticleDOI
TL;DR: A selective kernel (SK) U-Net convolutional neural network is proposed that adjusts the network's receptive fields via an attention mechanism and fuses feature maps extracted with dilated and conventional convolutions for breast mass segmentation in ultrasound (US) images.
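
The full SK U-Net is not reproduced here; purely as an illustration of the fusion idea in the summary (channel-wise attention deciding how much of the dilated versus conventional convolution branch to keep), a minimal PyTorch sketch under assumed channel sizes:

```python
import torch
import torch.nn as nn

class SelectiveKernelFusion(nn.Module):
    """Illustrative selective-kernel style fusion: a conventional 3x3 branch
    and a dilated 3x3 branch are mixed with channel-wise attention weights
    computed from their sum. Channel sizes are assumptions."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, hidden, 1), nn.ReLU()
        )
        # One attention logit per branch and per channel.
        self.select = nn.Conv2d(hidden, 2 * channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.conv(x), self.dilated(x)      # two receptive fields
        z = self.squeeze(a + b)                   # (B, hidden, 1, 1)
        logits = self.select(z).view(x.size(0), 2, -1, 1, 1)
        w = torch.softmax(logits, dim=1)          # attention over the 2 branches
        return w[:, 0] * a + w[:, 1] * b          # fused feature map

if __name__ == "__main__":
    fuse = SelectiveKernelFusion(channels=64)
    print(fuse(torch.randn(1, 64, 96, 96)).shape)  # torch.Size([1, 64, 96, 96])
```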

99 citations

Journal ArticleDOI
TL;DR: An overview of explainable artificial intelligence (XAI) used in deep learning-based medical image analysis can be found in this article, where a framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods.

94 citations

Posted Content
TL;DR: An overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis is presented in this paper, where a framework of XAI criteria is introduced to classify deep learning based methods.
Abstract: With an increase in deep learning-based methods, the call for explainability of such methods grows, especially in high-stakes decision making areas such as medical image analysis. This survey presents an overview of eXplainable Artificial Intelligence (XAI) used in deep learning-based medical image analysis. A framework of XAI criteria is introduced to classify deep learning-based medical image analysis methods. Papers on XAI techniques in medical image analysis are then surveyed and categorized according to the framework and according to anatomical location. The paper concludes with an outlook of future opportunities for XAI in medical image analysis.

92 citations