Author

Yuxin Wang

Other affiliations: University of Michigan
Bio: Yuxin Wang is an academic researcher from Nanjing University. The author has contributed to research on topics including ultrasonic sensors and breast ultrasound. The author has an h-index of 2 and has co-authored 6 publications receiving 92 citations. Previous affiliations of Yuxin Wang include the University of Michigan.

Papers
Journal ArticleDOI
TL;DR: This paper proposes convolutional neural networks (CNNs) for segmenting three-dimensional (3D) breast ultrasound images into four major tissues: skin, fibroglandular tissue, mass, and fatty tissue.

151 citations

Journal ArticleDOI
TL;DR: In this paper, an adaptive multi-sample-based approach is proposed to enhance the SNR of photoacoustic (PA) signals; in addition, detailed information in reconstructed PA images that was previously buried in noise can be distinguished.
Abstract: The energy of light delivered to human skin is strictly limited for safety reasons, which reduces the power of the photoacoustic (PA) signal and its signal-to-noise ratio (SNR); as a result, the quality of the final reconstructed PA image is degraded. This Letter proposes an adaptive multi-sample-based approach to enhance the SNR of PA signals; in addition, detailed information in reconstructed PA images that used to be buried in the noise can be distinguished. Both ex vivo and in vivo experiments are conducted to validate the effectiveness of the proposed method, demonstrating its potential value in clinical trials. OCIS codes: 100.2980, 170.1065, 170.5120. doi: 10.3788/COL201513.061001. Research on photoacoustic tomography (PAT) has developed rapidly because it promises noninvasive and nonionizing diagnosis of breast cancer, arthritis, and related diseases. PAT combines the merits of both ultrasound imaging and pure optical imaging, providing images with high ultrasonic resolution and high optical contrast [1–5]. Because the optical absorption of blood is strongly related to its hemoglobin content, PAT can realize functional as well as structural imaging, making this modality highly promising for clinical application [6]. The basic principle of PAT is that the tissue is irradiated with short nanosecond laser pulses; the absorbed energy causes a thermo-elastic expansion and subsequent contraction of the irradiated volume, which generates time-resolved photoacoustic (PA) waves that can be acquired by scanning small-aperture ultrasound detectors over a surface enclosing the source under study. The recorded PA waves can then be reconstructed to spatially resolve the initial absorber distribution and concentration via PA reconstruction algorithms [7–9]. However, biological tissue is a highly scattering medium for electromagnetic waves in the optical spectral range, and the propagating ultrasound waves are strongly attenuated before they reach the ultrasound sensors. Furthermore, the laser fluence on the tissue must be kept below 20 mJ/cm² for safe operation. Thus, in clinical trials, the ultrasound transducer can only receive weak PA signals with low signal-to-noise ratio (SNR), which degrades the final reconstructed PA images [10–14]. This Letter proposes an adaptive multi-sample-based approach to enhance the SNR of PA signals; in addition, detailed information in PAT images that used to be buried in noise and artifacts can be distinguished. PA reconstruction is an inverse problem for the source pressure. We assume a tissue with inhomogeneous microwave absorption but relatively homogeneous acoustic properties, and that the effect of heat diffusion on the thermoacoustic wave can be ignored. For cases where the scanning radius in a circular scan configuration is much greater than the PA wavelengths, the optical absorption p0(r) within the sample at a given position r is given as [1]
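The abstract does not spell out the adaptive weighting itself; as background, the sketch below shows plain coherent averaging over repeated acquisitions, the baseline that multi-sample SNR enhancement builds on. The array shapes, the synthetic pulse, and the average_pa_samples helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def average_pa_samples(samples: np.ndarray) -> np.ndarray:
    """Coherently average repeated PA acquisitions of the same A-line.

    samples: array of shape (n_repeats, n_time_points).
    Averaging n repeats with uncorrelated noise improves SNR by roughly sqrt(n).
    """
    return samples.mean(axis=0)

# Illustration with synthetic data: a weak pulse buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.exp(-((t - 0.5) ** 2) / 1e-4)              # idealized PA pulse
samples = signal + 0.5 * rng.standard_normal((64, t.size))
denoised = average_pa_samples(samples)                  # noise reduced ~8x for 64 repeats
```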

3 citations

Patent
13 Apr 2018
TL;DR: In this article, a method for extracting and fusing a two-dimensional image from three-dimensional (3D) tomography was proposed. The method comprises the following steps: an X-ray tomography apparatus is used to emit X-ray signals into the target three-dimensional space, and computed tomography is used to reconstruct a target 3D space image; an ultrasonic sensor is used to scan the target 3D space, and signal acquisition and image reconstruction in the current detection plane are carried out; a displacement sensor and an angle sensor are used to detect the position and angle information of the ultrasonic sensor in the 3D space.
Abstract: The invention discloses a method for extracting and fusing a two-dimensional image from three-dimensional tomography. The method comprises the following steps: an X-ray tomography apparatus is used to emit X-ray signals into the target three-dimensional space, and computed tomography is used to reconstruct a target three-dimensional space image; an ultrasonic sensor is used to detect the target three-dimensional space, and signal acquisition and image reconstruction in the current detection plane are carried out; a displacement sensor and an angle sensor are used to detect the position and angle information of the ultrasonic sensor in the three-dimensional space; in combination with the position and angle information of the ultrasonic sensor, a two-dimensional image of the current detection plane is extracted from the three-dimensional X-ray tomography image and fused with the ultrasonic image. By integrating the X-ray and ultrasonic detection means, a detection image with higher precision is acquired.
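As an illustration of the slice-extraction step the patent describes (pulling a 2D plane out of the reconstructed CT volume at the probe's measured position and angle), here is a minimal sketch using trilinear interpolation. The function name, argument layout, and the use of SciPy are assumptions for illustration, not the patented method.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_oblique_slice(volume, origin, u_dir, v_dir, size=(256, 256), spacing=1.0):
    """Sample a 2D plane from a 3D volume via trilinear interpolation.

    volume:        3D array indexed as (z, y, x).
    origin:        (z, y, x) voxel coordinates of the plane's corner.
    u_dir, v_dir:  direction vectors (z, y, x) spanning the plane, e.g. derived
                   from the displacement and angle sensors on the probe.
    """
    u_dir = np.asarray(u_dir, float) / np.linalg.norm(u_dir)
    v_dir = np.asarray(v_dir, float) / np.linalg.norm(v_dir)
    ii, jj = np.meshgrid(np.arange(size[0]), np.arange(size[1]), indexing="ij")
    # Coordinates of every plane pixel inside the volume, shape (3, H, W).
    pts = (np.asarray(origin, float)[:, None, None]
           + spacing * (u_dir[:, None, None] * ii + v_dir[:, None, None] * jj))
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Usage (hypothetical): ct_plane = extract_oblique_slice(ct_volume, probe_pos, u, v)
# A simple fusion could then alpha-blend the co-registered CT plane with the
# ultrasound image, e.g. fused = 0.5 * ct_plane_norm + 0.5 * us_norm.
```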

2 citations

Proceedings ArticleDOI
TL;DR: This paper proposes an automated algorithm to segment 3D whole breast ultrasound volumes into functionally distinct tissues, which may help correct ultrasound speed-of-sound aberrations and assist in density-based prognosis of breast cancer.
Abstract: Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
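The abstract reports tissue-volume agreement via an overlap ratio but does not define it; the sketch below uses the common Dice coefficient as a stand-in (an assumption), computed per tissue label over two 3D label volumes.

```python
import numpy as np

def dice_overlap(seg_a: np.ndarray, seg_b: np.ndarray, label: int) -> float:
    """Dice overlap for one tissue label between two 3D label volumes."""
    a = seg_a == label
    b = seg_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example: compare automated vs. manual segmentations for three tissue labels
# (1 = cyst/mass, 2 = fatty tissue, 3 = fibro-glandular tissue; labels assumed).
# scores = {lbl: dice_overlap(auto_seg, manual_seg, lbl) for lbl in (1, 2, 3)}
```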

1 citation

Journal ArticleDOI
TL;DR: In this article, the authors evaluated the effect of a low radiation dose on CT image quality and lesion diagnostic confidence using a 5-point Likert scale, and compared three levels of DLIR against ASiR-V on low-dose abdominal CT images.
Abstract: Background The image quality of computed tomography (CT) can be adversely affected by a low radiation dose, and reconstruction algorithms of an appropriate level may be useful in reducing this impact. Methods Eight sets of CT images of a phantom were reconstructed with filtered back projection (FBP); adaptive statistical iterative reconstruction-Veo (ASiR-V) at 30% (AV-30), 50% (AV-50), 80% (AV-80), and 100% (AV-100); and deep learning image reconstruction (DLIR) at low (DL-L), medium (DL-M), and high (DL-H) levels. The noise power spectrum (NPS) and task transfer function (TTF) were measured. Thirty consecutive patients underwent low-dose contrast-enhanced abdominal CT scans that were reconstructed using FBP, AV-30, AV-50, AV-80, AV-100, and the three levels of DLIR. The standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the hepatic parenchyma and paraspinal muscle were evaluated. Two radiologists assessed the subjective image quality and lesion diagnostic confidence using a 5-point Likert scale. Results In the phantom study, both a higher DLIR or ASiR-V strength and a higher radiation dose led to less noise. The NPS peak and average spatial frequency of the DLIR algorithms moved closer to those of FBP as the tube current increased, and declined as the ASiR-V and DLIR levels strengthened. The NPS average spatial frequency of DL-L was higher than that of ASiR-V. In the clinical study, AV-30 demonstrated a higher SD and lower SNR and CNR compared to DL-M and DL-H (P<0.05). For the qualitative assessment, DL-M produced the highest image quality scores, with the exception of overall image noise (P<0.05). The NPS peak, average spatial frequency, and SD were the highest, and the SNR, CNR, and subjective scores were the lowest, with FBP. Conclusions Compared with FBP and ASiR-V, DLIR provided better image quality and noise texture in both the phantom and clinical studies, and DL-M maintained the best image quality and lesion diagnostic confidence in low-dose abdominal CT.
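For reference, the SNR and CNR used in such image-quality studies are typically ROI-based measurements; a minimal sketch under common definitions (mean over SD within a tissue ROI, and contrast against a background ROI normalized by background noise) is shown below. The exact definitions and ROI placements used in this paper are not stated in the abstract, so treat these as assumptions.

```python
import numpy as np

def roi_snr(roi: np.ndarray) -> float:
    """SNR of a region of interest: mean attenuation divided by its standard deviation."""
    return roi.mean() / roi.std()

def roi_cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """CNR: absolute difference of ROI and background means, normalized by background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# Usage (placeholder ROI coordinates, not from the paper):
# liver  = ct_image[140:160, 200:220]
# muscle = ct_image[300:320, 180:200]
# print(roi_snr(liver), roi_cnr(liver, muscle))
```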

1 citation


Cited by
Journal ArticleDOI
15 Jul 2016
TL;DR: The conclusion of the current study was that the frequency of screening might depend on breast density, and in such cases diagnostic techniques such as digital mammography, ultrasonography, and magnetic resonance imaging may prove to be better detection tools.
Abstract: Breast cancer risk has increased over the years, and many factors are thought to contribute to it; however, it is not yet clearly established which factor is most important or accounts for the most risk. Mammography, which accounts for 80-90% of cancers detected, is believed to be the best method of detection. Mammographic density, which reflects increased proliferation of fat, stroma, epithelium, and connective tissue, is considered a risk factor for the development of breast cancer. The current study was therefore conducted to determine whether mammographic density is actually a risk factor for the development of breast cancer and to identify the better detection tools available. The methodology adopted was a review of journals and studies already published on mammographic density and its associated breast cancer risk. The conclusion of the current study, as well as of another comparable study, was that the frequency of screening might depend on breast density, and in such cases diagnostic techniques such as digital mammography, ultrasonography, and magnetic resonance imaging may prove to be better detection tools. Moreover, recent studies have also suggested that mammographic density does hold as a marker for the risk of developing breast cancer; however, this needs to be evaluated further. Article DOI: https://dx.doi.org/10.20319/lijhls.2016.22.4854

317 citations

Journal ArticleDOI
TL;DR: In this review, the basics of deep learning methods are discussed, along with an overview of successful implementations involving image segmentation for different medical applications, and the need for further improvements is pointed out.

227 citations

Journal ArticleDOI
TL;DR: This study presents a review of new applications of machine learning and deep learning technology for detecting and classifying breast cancer, and provides an overview of progress as well as future trends and challenges in the classification and detection of breast cancer.
Abstract: Breast cancer is the second leading cause of death for women, so accurate early detection can help decrease breast cancer mortality rates. Computer-aided detection allows radiologists to detect abnormalities efficiently. Medical images are sources of information relevant to the detection and diagnosis of various diseases and abnormalities. Several imaging modalities allow radiologists to study internal structures, and these modalities have attracted great interest in several types of research. In some medical fields, each of these modalities is of considerable significance. This study aims to present a review of new applications of machine learning and deep learning technology for detecting and classifying breast cancer and to provide an overview of progress in this area. The review addresses the classification of breast cancer using multi-modality medical imaging. Details are also given on techniques developed to facilitate the classification of tumors, non-tumors, and dense masses in various medical imaging modalities. It first provides an overview of the different approaches to machine learning, then an overview of the different deep learning techniques and specific architectures for the detection and classification of breast cancer. We also provide a brief overview of the different image modalities to give a complete picture of the area. The review drew on a broad variety of research databases as sources of information for access to publications in the field. Finally, this review summarizes the future trends and challenges in the classification and detection of breast cancer.

164 citations

Journal ArticleDOI
TL;DR: The concept of the matching layer is generalizable and can be used to improve the overall performance of the transfer learning techniques using deep convolutional neural networks.
Abstract: Purpose We propose a deep learning-based approach to breast mass classification in sonography and compare it with the assessment of four experienced radiologists employing the breast imaging reporting and data system 4th edition lexicon and assessment protocol. Methods Several transfer learning techniques are employed to develop classifiers based on a set of 882 ultrasound images of breast masses. Additionally, we introduce the concept of a matching layer. The aim of this layer is to rescale pixel intensities of the grayscale ultrasound images and convert those images to red, green, blue (RGB) to more efficiently utilize the discriminative power of the convolutional neural network pretrained on the ImageNet dataset. We present how this conversion can be determined during fine-tuning using back-propagation. Next, we compare the performance of the transfer learning techniques with and without the color conversion. To show the usefulness of our approach, we additionally evaluate it using two publicly available datasets. Results Color conversion increased the areas under the receiver operating characteristic (ROC) curve for each transfer learning method. For the better-performing approach utilizing the fine-tuning and the matching layer, the area under the curve was equal to 0.936 on a test set of 150 cases. The areas under the curves for the radiologists reading the same set of cases ranged from 0.806 to 0.882. In the case of the two separate datasets, utilizing the proposed approach we achieved areas under the curve of around 0.890. Conclusions The concept of the matching layer is generalizable and can be used to improve the overall performance of the transfer learning techniques using deep convolutional neural networks. When fully developed as a clinical tool, the methods proposed in this paper have the potential to help radiologists with breast mass classification in ultrasound.
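The matching layer described here amounts to a learnable mapping from a single grayscale channel to the three RGB channels expected by an ImageNet-pretrained backbone, trained jointly during fine-tuning. A minimal sketch of that idea follows; the 1x1-convolution parameterization and the ResNet-18 backbone are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision import models

class MatchingLayerClassifier(nn.Module):
    """Grayscale ultrasound -> learnable 3-channel conversion -> pretrained CNN."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Learnable intensity rescaling / gray-to-RGB conversion (1x1 convolution).
        self.matching = nn.Conv2d(1, 3, kernel_size=1, bias=True)
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale B-mode image.
        return self.backbone(self.matching(x))

# model = MatchingLayerClassifier()
# During fine-tuning, both the matching layer and the backbone receive gradients
# via back-propagation, which is how the abstract says the conversion is learned.
```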

150 citations

Journal ArticleDOI
TL;DR: This review article highlights the imperative role of machine learning algorithms in enabling efficient and accurate segmentation in the field of medical imaging, discusses several challenges related to the training of different machine learning models, and presents some heuristics to address those challenges.
Abstract: In recent years, significant progress has been made in developing more accurate and efficient machine learning algorithms for segmentation of medical and natural images. In this review article, we highlight the imperative role of machine learning algorithms in enabling efficient and accurate segmentation in the field of medical imaging. We specifically focus on several key studies pertaining to the application of machine learning methods to biomedical image segmentation. We review classical machine learning algorithms such as Markov random fields, k-means clustering, random forest, etc. Although such classical learning models are often less accurate compared to the deep learning techniques, they are often more sample efficient and have a less complex structure. We also review different deep learning architectures, such as the artificial neural networks (ANNs), the convolutional neural networks (CNNs), and the recurrent neural networks (RNNs), and present the segmentation results attained by those learning models that were published in the past three years. We highlight the successes and limitations of each machine learning paradigm. In addition, we discuss several challenges related to the training of different machine learning models, and we present some heuristics to address those challenges.
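As a concrete instance of the classical methods this review covers (k-means clustering among them), below is a minimal sketch of intensity-based k-means segmentation of a 2D image; the use of scikit-learn and the choice of three clusters are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster pixel intensities into n_clusters classes (a classical baseline)."""
    features = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(image.shape)

# Usage: label_map = kmeans_segment(grayscale_slice)  # one integer label per pixel
```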

109 citations